00:00:00.001 Started by upstream project "autotest-per-patch" build number 126144 00:00:00.002 originally caused by: 00:00:00.002 Started by user sys_sgci 00:00:00.039 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.039 The recommended git tool is: git 00:00:00.039 using credential 00000000-0000-0000-0000-000000000002 00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.066 Fetching changes from the remote Git repository 00:00:00.069 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.111 Using shallow fetch with depth 1 00:00:00.111 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.111 > git --version # timeout=10 00:00:00.171 > git --version # 'git version 2.39.2' 00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.224 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.224 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.289 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.302 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.315 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:05.315 > git config core.sparsecheckout # timeout=10 00:00:05.327 > git read-tree -mu HEAD # timeout=10 00:00:05.345 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:05.364 Commit message: "inventory: add WCP3 to free inventory" 00:00:05.364 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:05.446 [Pipeline] Start of Pipeline 00:00:05.457 [Pipeline] library 00:00:05.458 Loading library shm_lib@master 00:00:05.458 Library shm_lib@master is cached. Copying from home. 00:00:05.471 [Pipeline] node 00:00:05.478 Running on CYP9 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:05.480 [Pipeline] { 00:00:05.491 [Pipeline] catchError 00:00:05.493 [Pipeline] { 00:00:05.505 [Pipeline] wrap 00:00:05.512 [Pipeline] { 00:00:05.518 [Pipeline] stage 00:00:05.519 [Pipeline] { (Prologue) 00:00:05.682 [Pipeline] sh 00:00:05.964 + logger -p user.info -t JENKINS-CI 00:00:05.984 [Pipeline] echo 00:00:05.986 Node: CYP9 00:00:05.994 [Pipeline] sh 00:00:06.297 [Pipeline] setCustomBuildProperty 00:00:06.309 [Pipeline] echo 00:00:06.310 Cleanup processes 00:00:06.316 [Pipeline] sh 00:00:06.602 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.602 1088496 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.616 [Pipeline] sh 00:00:06.904 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:06.904 ++ grep -v 'sudo pgrep' 00:00:06.904 ++ awk '{print $1}' 00:00:06.904 + sudo kill -9 00:00:06.904 + true 00:00:06.935 [Pipeline] cleanWs 00:00:06.944 [WS-CLEANUP] Deleting project workspace... 00:00:06.944 [WS-CLEANUP] Deferred wipeout is used... 
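The "Cleanup processes" step above looks for stray SPDK processes still referencing the workspace and kills them before the checkout; here nothing was left running, so the kill receives an empty PID list and the trailing true absorbs the failure. A standalone sketch of that pattern (workspace path as in this job, everything else standard procps/coreutils):

    WORKSPACE=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    # List processes whose command line references the SPDK checkout, drop the
    # pgrep invocation itself, and keep only the PIDs.
    pids=$(sudo pgrep -af "$WORKSPACE/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 with no PIDs exits non-zero; "|| true" keeps the step green, which
    # is what the "+ true" trace line above corresponds to.
    sudo kill -9 $pids || true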
00:00:06.950 [WS-CLEANUP] done 00:00:06.953 [Pipeline] setCustomBuildProperty 00:00:06.962 [Pipeline] sh 00:00:07.293 + sudo git config --global --replace-all safe.directory '*' 00:00:07.390 [Pipeline] httpRequest 00:00:07.427 [Pipeline] echo 00:00:07.428 Sorcerer 10.211.164.101 is alive 00:00:07.435 [Pipeline] httpRequest 00:00:07.439 HttpMethod: GET 00:00:07.440 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.440 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.464 Response Code: HTTP/1.1 200 OK 00:00:07.464 Success: Status code 200 is in the accepted range: 200,404 00:00:07.464 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:18.325 [Pipeline] sh 00:00:18.613 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:18.631 [Pipeline] httpRequest 00:00:18.663 [Pipeline] echo 00:00:18.665 Sorcerer 10.211.164.101 is alive 00:00:18.674 [Pipeline] httpRequest 00:00:18.679 HttpMethod: GET 00:00:18.680 URL: http://10.211.164.101/packages/spdk_2945695e6f7677014e675cdb9965e1157e878c14.tar.gz 00:00:18.681 Sending request to url: http://10.211.164.101/packages/spdk_2945695e6f7677014e675cdb9965e1157e878c14.tar.gz 00:00:18.699 Response Code: HTTP/1.1 200 OK 00:00:18.700 Success: Status code 200 is in the accepted range: 200,404 00:00:18.701 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_2945695e6f7677014e675cdb9965e1157e878c14.tar.gz 00:01:57.868 [Pipeline] sh 00:01:58.161 + tar --no-same-owner -xf spdk_2945695e6f7677014e675cdb9965e1157e878c14.tar.gz 00:02:00.756 [Pipeline] sh 00:02:01.043 + git -C spdk log --oneline -n5 00:02:01.043 2945695e6 env: pack and assert size for spdk_env_opts 00:02:01.043 2f837bd56 sock: add spdk_sock_get_numa_socket_id 00:02:01.043 349cae072 sock: add spdk_sock_get_interface_name 00:02:01.043 a08c41a25 build: fix unit test builds that directly use env_dpdk 00:02:01.043 70df434d8 util: allow NULL saddr/caddr for spdk_net_getaddr 00:02:01.059 [Pipeline] } 00:02:01.081 [Pipeline] // stage 00:02:01.091 [Pipeline] stage 00:02:01.094 [Pipeline] { (Prepare) 00:02:01.115 [Pipeline] writeFile 00:02:01.133 [Pipeline] sh 00:02:01.420 + logger -p user.info -t JENKINS-CI 00:02:01.434 [Pipeline] sh 00:02:01.721 + logger -p user.info -t JENKINS-CI 00:02:01.735 [Pipeline] sh 00:02:02.022 + cat autorun-spdk.conf 00:02:02.022 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.022 SPDK_TEST_NVMF=1 00:02:02.022 SPDK_TEST_NVME_CLI=1 00:02:02.022 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.022 SPDK_TEST_NVMF_NICS=e810 00:02:02.022 SPDK_TEST_VFIOUSER=1 00:02:02.022 SPDK_RUN_UBSAN=1 00:02:02.022 NET_TYPE=phy 00:02:02.030 RUN_NIGHTLY=0 00:02:02.035 [Pipeline] readFile 00:02:02.063 [Pipeline] withEnv 00:02:02.065 [Pipeline] { 00:02:02.080 [Pipeline] sh 00:02:02.369 + set -ex 00:02:02.369 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:02:02.369 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:02.369 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.369 ++ SPDK_TEST_NVMF=1 00:02:02.369 ++ SPDK_TEST_NVME_CLI=1 00:02:02.369 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:02.369 ++ SPDK_TEST_NVMF_NICS=e810 00:02:02.369 ++ SPDK_TEST_VFIOUSER=1 00:02:02.369 ++ SPDK_RUN_UBSAN=1 00:02:02.369 ++ NET_TYPE=phy 00:02:02.369 ++ RUN_NIGHTLY=0 00:02:02.369 + case $SPDK_TEST_NVMF_NICS in 00:02:02.369 + DRIVERS=ice 00:02:02.369 + [[ tcp == \r\d\m\a ]] 
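The withEnv block above sources autorun-spdk.conf and maps the NIC under test (SPDK_TEST_NVMF_NICS=e810) to the kernel driver it needs; the rmmod/modprobe trace that follows unloads competing RDMA providers and loads ice. A condensed sketch of that driver-preparation logic, assuming the same configuration file (the non-e810 fallback is shown for illustration only):

    set -ex
    source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
    case $SPDK_TEST_NVMF_NICS in
      e810) DRIVERS=ice ;;                   # Intel E810, as in this run
      *)    DRIVERS=$SPDK_TEST_NVMF_NICS ;;  # illustrative fallback, not from the log
    esac
    if [[ -n $DRIVERS ]]; then
      # Drop RDMA providers that could claim the interfaces; none are loaded on
      # this host, so the rmmod errors are ignored.
      sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
      for D in $DRIVERS; do
        sudo modprobe "$D"
      done
    fi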
00:02:02.369 + [[ -n ice ]] 00:02:02.369 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:02:02.369 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:02.369 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:02:02.369 rmmod: ERROR: Module irdma is not currently loaded 00:02:02.369 rmmod: ERROR: Module i40iw is not currently loaded 00:02:02.369 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:02.369 + true 00:02:02.369 + for D in $DRIVERS 00:02:02.369 + sudo modprobe ice 00:02:02.369 + exit 0 00:02:02.380 [Pipeline] } 00:02:02.400 [Pipeline] // withEnv 00:02:02.406 [Pipeline] } 00:02:02.423 [Pipeline] // stage 00:02:02.434 [Pipeline] catchError 00:02:02.436 [Pipeline] { 00:02:02.450 [Pipeline] timeout 00:02:02.450 Timeout set to expire in 50 min 00:02:02.451 [Pipeline] { 00:02:02.464 [Pipeline] stage 00:02:02.466 [Pipeline] { (Tests) 00:02:02.481 [Pipeline] sh 00:02:02.767 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:02.767 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:02.767 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:02.767 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:02.767 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:02.767 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:02.767 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:02.767 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:02.767 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:02.767 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:02:02.767 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:02:02.767 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:02.767 + source /etc/os-release 00:02:02.767 ++ NAME='Fedora Linux' 00:02:02.767 ++ VERSION='38 (Cloud Edition)' 00:02:02.767 ++ ID=fedora 00:02:02.767 ++ VERSION_ID=38 00:02:02.767 ++ VERSION_CODENAME= 00:02:02.767 ++ PLATFORM_ID=platform:f38 00:02:02.767 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:02.767 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:02.767 ++ LOGO=fedora-logo-icon 00:02:02.767 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:02.767 ++ HOME_URL=https://fedoraproject.org/ 00:02:02.767 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:02.767 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:02.767 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:02.767 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:02.767 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:02.767 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:02.767 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:02.767 ++ SUPPORT_END=2024-05-14 00:02:02.767 ++ VARIANT='Cloud Edition' 00:02:02.767 ++ VARIANT_ID=cloud 00:02:02.767 + uname -a 00:02:02.767 Linux spdk-cyp-09 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:02.767 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:05.308 Hugepages 00:02:05.308 node hugesize free / total 00:02:05.308 node0 1048576kB 0 / 0 00:02:05.308 node0 2048kB 0 / 0 00:02:05.308 node1 1048576kB 0 / 0 00:02:05.308 node1 2048kB 0 / 0 00:02:05.308 00:02:05.308 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:05.308 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.3 
8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:02:05.308 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:02:05.308 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:02:05.308 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:02:05.308 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:02:05.308 + rm -f /tmp/spdk-ld-path 00:02:05.308 + source autorun-spdk.conf 00:02:05.308 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.308 ++ SPDK_TEST_NVMF=1 00:02:05.308 ++ SPDK_TEST_NVME_CLI=1 00:02:05.308 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.308 ++ SPDK_TEST_NVMF_NICS=e810 00:02:05.308 ++ SPDK_TEST_VFIOUSER=1 00:02:05.308 ++ SPDK_RUN_UBSAN=1 00:02:05.308 ++ NET_TYPE=phy 00:02:05.308 ++ RUN_NIGHTLY=0 00:02:05.308 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:05.308 + [[ -n '' ]] 00:02:05.308 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:05.308 + for M in /var/spdk/build-*-manifest.txt 00:02:05.308 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:05.308 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.308 + for M in /var/spdk/build-*-manifest.txt 00:02:05.308 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:05.308 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:05.308 ++ uname 00:02:05.308 + [[ Linux == \L\i\n\u\x ]] 00:02:05.308 + sudo dmesg -T 00:02:05.569 + sudo dmesg --clear 00:02:05.569 + dmesg_pid=1090034 00:02:05.569 + [[ Fedora Linux == FreeBSD ]] 00:02:05.569 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.569 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:05.569 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:05.569 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:05.569 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:02:05.569 + [[ -x /usr/src/fio-static/fio ]] 00:02:05.569 + export FIO_BIN=/usr/src/fio-static/fio 00:02:05.569 + FIO_BIN=/usr/src/fio-static/fio 00:02:05.569 + sudo dmesg -Tw 00:02:05.569 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:05.569 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:05.569 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:05.569 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.569 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:05.569 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:05.569 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.569 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:05.569 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:05.569 Test configuration: 00:02:05.569 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:05.569 SPDK_TEST_NVMF=1 00:02:05.569 SPDK_TEST_NVME_CLI=1 00:02:05.569 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:05.569 SPDK_TEST_NVMF_NICS=e810 00:02:05.569 SPDK_TEST_VFIOUSER=1 00:02:05.569 SPDK_RUN_UBSAN=1 00:02:05.569 NET_TYPE=phy 00:02:05.569 RUN_NIGHTLY=0 18:58:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:05.569 18:58:11 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:05.569 18:58:11 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:05.569 18:58:11 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:05.569 18:58:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.569 18:58:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.569 18:58:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.569 18:58:11 -- paths/export.sh@5 -- $ export PATH 00:02:05.569 18:58:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:05.569 18:58:11 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:05.569 18:58:11 -- common/autobuild_common.sh@444 -- $ date +%s 00:02:05.569 18:58:11 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720803491.XXXXXX 00:02:05.569 18:58:11 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720803491.RszySd 00:02:05.569 18:58:11 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:02:05.569 18:58:11 -- 
common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:02:05.569 18:58:11 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:02:05.569 18:58:11 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:05.569 18:58:11 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:05.569 18:58:11 -- common/autobuild_common.sh@460 -- $ get_config_params 00:02:05.569 18:58:11 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:02:05.569 18:58:11 -- common/autotest_common.sh@10 -- $ set +x 00:02:05.569 18:58:11 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:02:05.570 18:58:11 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:02:05.570 18:58:11 -- pm/common@17 -- $ local monitor 00:02:05.570 18:58:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.570 18:58:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.570 18:58:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.570 18:58:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:05.570 18:58:11 -- pm/common@21 -- $ date +%s 00:02:05.570 18:58:11 -- pm/common@25 -- $ sleep 1 00:02:05.570 18:58:11 -- pm/common@21 -- $ date +%s 00:02:05.570 18:58:11 -- pm/common@21 -- $ date +%s 00:02:05.570 18:58:11 -- pm/common@21 -- $ date +%s 00:02:05.570 18:58:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803491 00:02:05.570 18:58:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803491 00:02:05.570 18:58:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803491 00:02:05.570 18:58:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720803491 00:02:05.570 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803491_collect-vmstat.pm.log 00:02:05.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803491_collect-cpu-load.pm.log 00:02:05.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803491_collect-cpu-temp.pm.log 00:02:05.830 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720803491_collect-bmc-pm.bmc.pm.log 00:02:06.768 18:58:12 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:02:06.769 18:58:12 
-- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:06.769 18:58:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:06.769 18:58:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:06.769 18:58:12 -- spdk/autobuild.sh@16 -- $ date -u 00:02:06.769 Fri Jul 12 04:58:12 PM UTC 2024 00:02:06.769 18:58:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:06.769 v24.09-pre-215-g2945695e6 00:02:06.769 18:58:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:06.769 18:58:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:06.769 18:58:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:06.769 18:58:12 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:06.769 18:58:12 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:06.769 18:58:12 -- common/autotest_common.sh@10 -- $ set +x 00:02:06.769 ************************************ 00:02:06.769 START TEST ubsan 00:02:06.769 ************************************ 00:02:06.769 18:58:12 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:02:06.769 using ubsan 00:02:06.769 00:02:06.769 real 0m0.001s 00:02:06.769 user 0m0.000s 00:02:06.769 sys 0m0.000s 00:02:06.769 18:58:12 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:06.769 18:58:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:06.769 ************************************ 00:02:06.769 END TEST ubsan 00:02:06.769 ************************************ 00:02:06.769 18:58:12 -- common/autotest_common.sh@1142 -- $ return 0 00:02:06.769 18:58:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:06.769 18:58:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:06.769 18:58:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:06.769 18:58:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:06.769 18:58:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:06.769 18:58:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:06.769 18:58:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:06.769 18:58:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:06.769 18:58:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:02:07.027 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:07.027 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:07.287 Using 'verbs' RDMA provider 00:02:23.131 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:35.367 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:35.367 Creating mk/config.mk...done. 00:02:35.367 Creating mk/cc.flags.mk...done. 00:02:35.367 Type 'make' to build. 
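autobuild.sh configures the tree with the flags assembled by get_config_params plus --with-shared; reproducing this configuration by hand uses the same command line shown above:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # With no explicit --with-dpdk or env override, the bundled DPDK and the
    # default env_dpdk are used, as the "Using default ..." messages report.
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared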
00:02:35.367 18:58:40 -- spdk/autobuild.sh@69 -- $ run_test make make -j144 00:02:35.367 18:58:40 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:35.367 18:58:40 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:35.367 18:58:40 -- common/autotest_common.sh@10 -- $ set +x 00:02:35.367 ************************************ 00:02:35.367 START TEST make 00:02:35.367 ************************************ 00:02:35.367 18:58:40 make -- common/autotest_common.sh@1123 -- $ make -j144 00:02:35.367 make[1]: Nothing to be done for 'all'. 00:02:36.313 The Meson build system 00:02:36.313 Version: 1.3.1 00:02:36.313 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:36.313 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:36.313 Build type: native build 00:02:36.313 Project name: libvfio-user 00:02:36.313 Project version: 0.0.1 00:02:36.313 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:36.313 C linker for the host machine: cc ld.bfd 2.39-16 00:02:36.313 Host machine cpu family: x86_64 00:02:36.313 Host machine cpu: x86_64 00:02:36.313 Run-time dependency threads found: YES 00:02:36.313 Library dl found: YES 00:02:36.313 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:36.313 Run-time dependency json-c found: YES 0.17 00:02:36.313 Run-time dependency cmocka found: YES 1.1.7 00:02:36.313 Program pytest-3 found: NO 00:02:36.313 Program flake8 found: NO 00:02:36.313 Program misspell-fixer found: NO 00:02:36.313 Program restructuredtext-lint found: NO 00:02:36.313 Program valgrind found: YES (/usr/bin/valgrind) 00:02:36.313 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:36.313 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:36.313 Compiler for C supports arguments -Wwrite-strings: YES 00:02:36.313 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:36.313 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:36.313 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:36.313 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
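Both the ubsan check and the top-level build above are driven through the run_test helper from autotest_common.sh, which brackets an arbitrary command with START/END banners and a timing summary (the real/user/sys lines above). A minimal stand-in that reproduces only that visible behaviour would be:

    # Sketch only: the real run_test also records results for the test summary.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }

    run_test ubsan echo 'using ubsan'   # as invoked by autobuild.sh above
    run_test make make -j144            # the build currently in progress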
00:02:36.313 Build targets in project: 8 00:02:36.313 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:36.313 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:36.313 00:02:36.313 libvfio-user 0.0.1 00:02:36.313 00:02:36.313 User defined options 00:02:36.313 buildtype : debug 00:02:36.313 default_library: shared 00:02:36.313 libdir : /usr/local/lib 00:02:36.313 00:02:36.313 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:36.888 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:36.888 [1/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:36.888 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:36.888 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:36.888 [4/37] Compiling C object samples/null.p/null.c.o 00:02:36.888 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:36.888 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:36.888 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:36.888 [8/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:36.888 [9/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:36.888 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:36.888 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:36.888 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:36.888 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:36.888 [14/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:36.888 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:36.888 [16/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:36.888 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:36.888 [18/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:36.888 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:36.888 [20/37] Compiling C object samples/client.p/client.c.o 00:02:36.888 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:36.888 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:36.888 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:36.888 [24/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:36.888 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:36.888 [26/37] Compiling C object samples/server.p/server.c.o 00:02:36.888 [27/37] Linking target samples/client 00:02:36.888 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:36.888 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:36.888 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:37.151 [31/37] Linking target test/unit_tests 00:02:37.151 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:37.151 [33/37] Linking target samples/shadow_ioeventfd_server 00:02:37.151 [34/37] Linking target samples/null 00:02:37.151 [35/37] Linking target samples/lspci 00:02:37.151 [36/37] Linking target samples/server 00:02:37.151 [37/37] Linking target samples/gpio-pci-idio-16 00:02:37.151 INFO: autodetecting backend as ninja 00:02:37.151 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
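The [1/37]…[37/37] steps above are the nested Meson/ninja build of the bundled libvfio-user, pulled in by --with-vfio-user; the DESTDIR'd meson install that follows stages it under build/libvfio-user. Driving the same subproject by hand would look roughly like this, using the source and build directories Meson printed above:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Configure with the options Meson reported (buildtype debug, shared
    # library, libdir /usr/local/lib), then build and stage the result.
    meson setup "$SPDK/build/libvfio-user/build-debug" "$SPDK/libvfio-user" \
        --buildtype debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$SPDK/build/libvfio-user/build-debug"
    DESTDIR="$SPDK/build/libvfio-user" meson install --quiet -C "$SPDK/build/libvfio-user/build-debug"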
00:02:37.151 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:37.721 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:37.721 ninja: no work to do. 00:02:44.403 The Meson build system 00:02:44.403 Version: 1.3.1 00:02:44.403 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:44.403 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:44.403 Build type: native build 00:02:44.403 Program cat found: YES (/usr/bin/cat) 00:02:44.403 Project name: DPDK 00:02:44.403 Project version: 24.03.0 00:02:44.403 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:44.403 C linker for the host machine: cc ld.bfd 2.39-16 00:02:44.403 Host machine cpu family: x86_64 00:02:44.403 Host machine cpu: x86_64 00:02:44.403 Message: ## Building in Developer Mode ## 00:02:44.403 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:44.403 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:44.403 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:44.403 Program python3 found: YES (/usr/bin/python3) 00:02:44.403 Program cat found: YES (/usr/bin/cat) 00:02:44.403 Compiler for C supports arguments -march=native: YES 00:02:44.403 Checking for size of "void *" : 8 00:02:44.403 Checking for size of "void *" : 8 (cached) 00:02:44.403 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:44.403 Library m found: YES 00:02:44.403 Library numa found: YES 00:02:44.403 Has header "numaif.h" : YES 00:02:44.403 Library fdt found: NO 00:02:44.403 Library execinfo found: NO 00:02:44.403 Has header "execinfo.h" : YES 00:02:44.403 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:44.403 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:44.403 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:44.403 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:44.403 Run-time dependency openssl found: YES 3.0.9 00:02:44.403 Run-time dependency libpcap found: YES 1.10.4 00:02:44.403 Has header "pcap.h" with dependency libpcap: YES 00:02:44.403 Compiler for C supports arguments -Wcast-qual: YES 00:02:44.403 Compiler for C supports arguments -Wdeprecated: YES 00:02:44.403 Compiler for C supports arguments -Wformat: YES 00:02:44.403 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:44.403 Compiler for C supports arguments -Wformat-security: NO 00:02:44.403 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:44.403 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:44.403 Compiler for C supports arguments -Wnested-externs: YES 00:02:44.403 Compiler for C supports arguments -Wold-style-definition: YES 00:02:44.403 Compiler for C supports arguments -Wpointer-arith: YES 00:02:44.403 Compiler for C supports arguments -Wsign-compare: YES 00:02:44.403 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:44.403 Compiler for C supports arguments -Wundef: YES 00:02:44.403 Compiler for C supports arguments -Wwrite-strings: YES 00:02:44.403 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:44.403 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:44.403 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:44.403 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:44.403 Program objdump found: YES (/usr/bin/objdump) 00:02:44.403 Compiler for C supports arguments -mavx512f: YES 00:02:44.403 Checking if "AVX512 checking" compiles: YES 00:02:44.403 Fetching value of define "__SSE4_2__" : 1 00:02:44.403 Fetching value of define "__AES__" : 1 00:02:44.403 Fetching value of define "__AVX__" : 1 00:02:44.403 Fetching value of define "__AVX2__" : 1 00:02:44.403 Fetching value of define "__AVX512BW__" : 1 00:02:44.403 Fetching value of define "__AVX512CD__" : 1 00:02:44.403 Fetching value of define "__AVX512DQ__" : 1 00:02:44.403 Fetching value of define "__AVX512F__" : 1 00:02:44.403 Fetching value of define "__AVX512VL__" : 1 00:02:44.403 Fetching value of define "__PCLMUL__" : 1 00:02:44.403 Fetching value of define "__RDRND__" : 1 00:02:44.403 Fetching value of define "__RDSEED__" : 1 00:02:44.403 Fetching value of define "__VPCLMULQDQ__" : 1 00:02:44.403 Fetching value of define "__znver1__" : (undefined) 00:02:44.403 Fetching value of define "__znver2__" : (undefined) 00:02:44.403 Fetching value of define "__znver3__" : (undefined) 00:02:44.403 Fetching value of define "__znver4__" : (undefined) 00:02:44.403 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:44.403 Message: lib/log: Defining dependency "log" 00:02:44.403 Message: lib/kvargs: Defining dependency "kvargs" 00:02:44.403 Message: lib/telemetry: Defining dependency "telemetry" 00:02:44.403 Checking for function "getentropy" : NO 00:02:44.403 Message: lib/eal: Defining dependency "eal" 00:02:44.403 Message: lib/ring: Defining dependency "ring" 00:02:44.403 Message: lib/rcu: Defining dependency "rcu" 00:02:44.403 Message: lib/mempool: Defining dependency "mempool" 00:02:44.403 Message: lib/mbuf: Defining dependency "mbuf" 00:02:44.403 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:44.403 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:44.403 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:44.403 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:44.403 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:44.403 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:44.403 Compiler for C supports arguments -mpclmul: YES 00:02:44.403 Compiler for C supports arguments -maes: YES 00:02:44.403 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:44.403 Compiler for C supports arguments -mavx512bw: YES 00:02:44.403 Compiler for C supports arguments -mavx512dq: YES 00:02:44.403 Compiler for C supports arguments -mavx512vl: YES 00:02:44.403 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:44.403 Compiler for C supports arguments -mavx2: YES 00:02:44.403 Compiler for C supports arguments -mavx: YES 00:02:44.403 Message: lib/net: Defining dependency "net" 00:02:44.403 Message: lib/meter: Defining dependency "meter" 00:02:44.403 Message: lib/ethdev: Defining dependency "ethdev" 00:02:44.403 Message: lib/pci: Defining dependency "pci" 00:02:44.403 Message: lib/cmdline: Defining dependency "cmdline" 00:02:44.403 Message: lib/hash: Defining dependency "hash" 00:02:44.403 Message: lib/timer: Defining dependency "timer" 00:02:44.403 Message: lib/compressdev: Defining dependency "compressdev" 00:02:44.403 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:44.403 Message: lib/dmadev: Defining dependency "dmadev" 00:02:44.403 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:02:44.403 Message: lib/power: Defining dependency "power" 00:02:44.403 Message: lib/reorder: Defining dependency "reorder" 00:02:44.403 Message: lib/security: Defining dependency "security" 00:02:44.403 Has header "linux/userfaultfd.h" : YES 00:02:44.403 Has header "linux/vduse.h" : YES 00:02:44.403 Message: lib/vhost: Defining dependency "vhost" 00:02:44.403 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:44.403 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:44.403 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:44.403 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:44.403 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:44.403 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:44.404 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:44.404 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:44.404 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:44.404 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:44.404 Program doxygen found: YES (/usr/bin/doxygen) 00:02:44.404 Configuring doxy-api-html.conf using configuration 00:02:44.404 Configuring doxy-api-man.conf using configuration 00:02:44.404 Program mandb found: YES (/usr/bin/mandb) 00:02:44.404 Program sphinx-build found: NO 00:02:44.404 Configuring rte_build_config.h using configuration 00:02:44.404 Message: 00:02:44.404 ================= 00:02:44.404 Applications Enabled 00:02:44.404 ================= 00:02:44.404 00:02:44.404 apps: 00:02:44.404 00:02:44.404 00:02:44.404 Message: 00:02:44.404 ================= 00:02:44.404 Libraries Enabled 00:02:44.404 ================= 00:02:44.404 00:02:44.404 libs: 00:02:44.404 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:44.404 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:44.404 cryptodev, dmadev, power, reorder, security, vhost, 00:02:44.404 00:02:44.404 Message: 00:02:44.404 =============== 00:02:44.404 Drivers Enabled 00:02:44.404 =============== 00:02:44.404 00:02:44.404 common: 00:02:44.404 00:02:44.404 bus: 00:02:44.404 pci, vdev, 00:02:44.404 mempool: 00:02:44.404 ring, 00:02:44.404 dma: 00:02:44.404 00:02:44.404 net: 00:02:44.404 00:02:44.404 crypto: 00:02:44.404 00:02:44.404 compress: 00:02:44.404 00:02:44.404 vdpa: 00:02:44.404 00:02:44.404 00:02:44.404 Message: 00:02:44.404 ================= 00:02:44.404 Content Skipped 00:02:44.404 ================= 00:02:44.404 00:02:44.404 apps: 00:02:44.404 dumpcap: explicitly disabled via build config 00:02:44.404 graph: explicitly disabled via build config 00:02:44.404 pdump: explicitly disabled via build config 00:02:44.404 proc-info: explicitly disabled via build config 00:02:44.404 test-acl: explicitly disabled via build config 00:02:44.404 test-bbdev: explicitly disabled via build config 00:02:44.404 test-cmdline: explicitly disabled via build config 00:02:44.404 test-compress-perf: explicitly disabled via build config 00:02:44.404 test-crypto-perf: explicitly disabled via build config 00:02:44.404 test-dma-perf: explicitly disabled via build config 00:02:44.404 test-eventdev: explicitly disabled via build config 00:02:44.404 test-fib: explicitly disabled via build config 00:02:44.404 test-flow-perf: explicitly disabled via build config 00:02:44.404 test-gpudev: explicitly disabled via build config 00:02:44.404 
test-mldev: explicitly disabled via build config 00:02:44.404 test-pipeline: explicitly disabled via build config 00:02:44.404 test-pmd: explicitly disabled via build config 00:02:44.404 test-regex: explicitly disabled via build config 00:02:44.404 test-sad: explicitly disabled via build config 00:02:44.404 test-security-perf: explicitly disabled via build config 00:02:44.404 00:02:44.404 libs: 00:02:44.404 argparse: explicitly disabled via build config 00:02:44.404 metrics: explicitly disabled via build config 00:02:44.404 acl: explicitly disabled via build config 00:02:44.404 bbdev: explicitly disabled via build config 00:02:44.404 bitratestats: explicitly disabled via build config 00:02:44.404 bpf: explicitly disabled via build config 00:02:44.404 cfgfile: explicitly disabled via build config 00:02:44.404 distributor: explicitly disabled via build config 00:02:44.404 efd: explicitly disabled via build config 00:02:44.404 eventdev: explicitly disabled via build config 00:02:44.404 dispatcher: explicitly disabled via build config 00:02:44.404 gpudev: explicitly disabled via build config 00:02:44.404 gro: explicitly disabled via build config 00:02:44.404 gso: explicitly disabled via build config 00:02:44.404 ip_frag: explicitly disabled via build config 00:02:44.404 jobstats: explicitly disabled via build config 00:02:44.404 latencystats: explicitly disabled via build config 00:02:44.404 lpm: explicitly disabled via build config 00:02:44.404 member: explicitly disabled via build config 00:02:44.404 pcapng: explicitly disabled via build config 00:02:44.404 rawdev: explicitly disabled via build config 00:02:44.404 regexdev: explicitly disabled via build config 00:02:44.404 mldev: explicitly disabled via build config 00:02:44.404 rib: explicitly disabled via build config 00:02:44.404 sched: explicitly disabled via build config 00:02:44.404 stack: explicitly disabled via build config 00:02:44.404 ipsec: explicitly disabled via build config 00:02:44.404 pdcp: explicitly disabled via build config 00:02:44.404 fib: explicitly disabled via build config 00:02:44.404 port: explicitly disabled via build config 00:02:44.404 pdump: explicitly disabled via build config 00:02:44.404 table: explicitly disabled via build config 00:02:44.404 pipeline: explicitly disabled via build config 00:02:44.404 graph: explicitly disabled via build config 00:02:44.404 node: explicitly disabled via build config 00:02:44.404 00:02:44.404 drivers: 00:02:44.404 common/cpt: not in enabled drivers build config 00:02:44.404 common/dpaax: not in enabled drivers build config 00:02:44.404 common/iavf: not in enabled drivers build config 00:02:44.404 common/idpf: not in enabled drivers build config 00:02:44.404 common/ionic: not in enabled drivers build config 00:02:44.404 common/mvep: not in enabled drivers build config 00:02:44.404 common/octeontx: not in enabled drivers build config 00:02:44.404 bus/auxiliary: not in enabled drivers build config 00:02:44.404 bus/cdx: not in enabled drivers build config 00:02:44.404 bus/dpaa: not in enabled drivers build config 00:02:44.404 bus/fslmc: not in enabled drivers build config 00:02:44.404 bus/ifpga: not in enabled drivers build config 00:02:44.404 bus/platform: not in enabled drivers build config 00:02:44.404 bus/uacce: not in enabled drivers build config 00:02:44.404 bus/vmbus: not in enabled drivers build config 00:02:44.404 common/cnxk: not in enabled drivers build config 00:02:44.404 common/mlx5: not in enabled drivers build config 00:02:44.404 common/nfp: not in enabled drivers 
build config 00:02:44.404 common/nitrox: not in enabled drivers build config 00:02:44.404 common/qat: not in enabled drivers build config 00:02:44.404 common/sfc_efx: not in enabled drivers build config 00:02:44.404 mempool/bucket: not in enabled drivers build config 00:02:44.404 mempool/cnxk: not in enabled drivers build config 00:02:44.404 mempool/dpaa: not in enabled drivers build config 00:02:44.404 mempool/dpaa2: not in enabled drivers build config 00:02:44.404 mempool/octeontx: not in enabled drivers build config 00:02:44.404 mempool/stack: not in enabled drivers build config 00:02:44.404 dma/cnxk: not in enabled drivers build config 00:02:44.404 dma/dpaa: not in enabled drivers build config 00:02:44.404 dma/dpaa2: not in enabled drivers build config 00:02:44.404 dma/hisilicon: not in enabled drivers build config 00:02:44.404 dma/idxd: not in enabled drivers build config 00:02:44.404 dma/ioat: not in enabled drivers build config 00:02:44.404 dma/skeleton: not in enabled drivers build config 00:02:44.404 net/af_packet: not in enabled drivers build config 00:02:44.404 net/af_xdp: not in enabled drivers build config 00:02:44.404 net/ark: not in enabled drivers build config 00:02:44.404 net/atlantic: not in enabled drivers build config 00:02:44.404 net/avp: not in enabled drivers build config 00:02:44.404 net/axgbe: not in enabled drivers build config 00:02:44.404 net/bnx2x: not in enabled drivers build config 00:02:44.404 net/bnxt: not in enabled drivers build config 00:02:44.404 net/bonding: not in enabled drivers build config 00:02:44.404 net/cnxk: not in enabled drivers build config 00:02:44.404 net/cpfl: not in enabled drivers build config 00:02:44.404 net/cxgbe: not in enabled drivers build config 00:02:44.404 net/dpaa: not in enabled drivers build config 00:02:44.404 net/dpaa2: not in enabled drivers build config 00:02:44.404 net/e1000: not in enabled drivers build config 00:02:44.404 net/ena: not in enabled drivers build config 00:02:44.404 net/enetc: not in enabled drivers build config 00:02:44.404 net/enetfec: not in enabled drivers build config 00:02:44.404 net/enic: not in enabled drivers build config 00:02:44.404 net/failsafe: not in enabled drivers build config 00:02:44.404 net/fm10k: not in enabled drivers build config 00:02:44.404 net/gve: not in enabled drivers build config 00:02:44.404 net/hinic: not in enabled drivers build config 00:02:44.404 net/hns3: not in enabled drivers build config 00:02:44.404 net/i40e: not in enabled drivers build config 00:02:44.404 net/iavf: not in enabled drivers build config 00:02:44.404 net/ice: not in enabled drivers build config 00:02:44.404 net/idpf: not in enabled drivers build config 00:02:44.404 net/igc: not in enabled drivers build config 00:02:44.404 net/ionic: not in enabled drivers build config 00:02:44.404 net/ipn3ke: not in enabled drivers build config 00:02:44.404 net/ixgbe: not in enabled drivers build config 00:02:44.404 net/mana: not in enabled drivers build config 00:02:44.404 net/memif: not in enabled drivers build config 00:02:44.404 net/mlx4: not in enabled drivers build config 00:02:44.404 net/mlx5: not in enabled drivers build config 00:02:44.404 net/mvneta: not in enabled drivers build config 00:02:44.404 net/mvpp2: not in enabled drivers build config 00:02:44.404 net/netvsc: not in enabled drivers build config 00:02:44.404 net/nfb: not in enabled drivers build config 00:02:44.404 net/nfp: not in enabled drivers build config 00:02:44.404 net/ngbe: not in enabled drivers build config 00:02:44.404 net/null: not in 
enabled drivers build config 00:02:44.404 net/octeontx: not in enabled drivers build config 00:02:44.404 net/octeon_ep: not in enabled drivers build config 00:02:44.404 net/pcap: not in enabled drivers build config 00:02:44.404 net/pfe: not in enabled drivers build config 00:02:44.404 net/qede: not in enabled drivers build config 00:02:44.404 net/ring: not in enabled drivers build config 00:02:44.404 net/sfc: not in enabled drivers build config 00:02:44.404 net/softnic: not in enabled drivers build config 00:02:44.404 net/tap: not in enabled drivers build config 00:02:44.404 net/thunderx: not in enabled drivers build config 00:02:44.404 net/txgbe: not in enabled drivers build config 00:02:44.404 net/vdev_netvsc: not in enabled drivers build config 00:02:44.404 net/vhost: not in enabled drivers build config 00:02:44.404 net/virtio: not in enabled drivers build config 00:02:44.404 net/vmxnet3: not in enabled drivers build config 00:02:44.404 raw/*: missing internal dependency, "rawdev" 00:02:44.404 crypto/armv8: not in enabled drivers build config 00:02:44.404 crypto/bcmfs: not in enabled drivers build config 00:02:44.404 crypto/caam_jr: not in enabled drivers build config 00:02:44.404 crypto/ccp: not in enabled drivers build config 00:02:44.404 crypto/cnxk: not in enabled drivers build config 00:02:44.404 crypto/dpaa_sec: not in enabled drivers build config 00:02:44.404 crypto/dpaa2_sec: not in enabled drivers build config 00:02:44.404 crypto/ipsec_mb: not in enabled drivers build config 00:02:44.405 crypto/mlx5: not in enabled drivers build config 00:02:44.405 crypto/mvsam: not in enabled drivers build config 00:02:44.405 crypto/nitrox: not in enabled drivers build config 00:02:44.405 crypto/null: not in enabled drivers build config 00:02:44.405 crypto/octeontx: not in enabled drivers build config 00:02:44.405 crypto/openssl: not in enabled drivers build config 00:02:44.405 crypto/scheduler: not in enabled drivers build config 00:02:44.405 crypto/uadk: not in enabled drivers build config 00:02:44.405 crypto/virtio: not in enabled drivers build config 00:02:44.405 compress/isal: not in enabled drivers build config 00:02:44.405 compress/mlx5: not in enabled drivers build config 00:02:44.405 compress/nitrox: not in enabled drivers build config 00:02:44.405 compress/octeontx: not in enabled drivers build config 00:02:44.405 compress/zlib: not in enabled drivers build config 00:02:44.405 regex/*: missing internal dependency, "regexdev" 00:02:44.405 ml/*: missing internal dependency, "mldev" 00:02:44.405 vdpa/ifc: not in enabled drivers build config 00:02:44.405 vdpa/mlx5: not in enabled drivers build config 00:02:44.405 vdpa/nfp: not in enabled drivers build config 00:02:44.405 vdpa/sfc: not in enabled drivers build config 00:02:44.405 event/*: missing internal dependency, "eventdev" 00:02:44.405 baseband/*: missing internal dependency, "bbdev" 00:02:44.405 gpu/*: missing internal dependency, "gpudev" 00:02:44.405 00:02:44.405 00:02:44.405 Build targets in project: 84 00:02:44.405 00:02:44.405 DPDK 24.03.0 00:02:44.405 00:02:44.405 User defined options 00:02:44.405 buildtype : debug 00:02:44.405 default_library : shared 00:02:44.405 libdir : lib 00:02:44.405 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:44.405 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:44.405 c_link_args : 00:02:44.405 cpu_instruction_set: native 00:02:44.405 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:44.405 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:44.405 enable_docs : false 00:02:44.405 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:44.405 enable_kmods : false 00:02:44.405 max_lcores : 128 00:02:44.405 tests : false 00:02:44.405 00:02:44.405 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:44.405 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:44.405 [1/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:44.405 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:44.405 [3/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:44.405 [4/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:44.405 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:44.405 [6/267] Linking static target lib/librte_kvargs.a 00:02:44.405 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:44.405 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:44.405 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:44.405 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:44.405 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:44.405 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:44.405 [13/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:44.405 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:44.405 [15/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:44.405 [16/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:44.405 [17/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:44.405 [18/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:44.405 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:44.405 [20/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:44.405 [21/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:44.405 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:44.405 [23/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:44.405 [24/267] Linking static target lib/librte_log.a 00:02:44.405 [25/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.405 [26/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:44.405 [27/267] Linking static target lib/librte_pci.a 00:02:44.405 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:44.405 [29/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:44.405 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.405 [31/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 
00:02:44.405 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:44.405 [33/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:44.405 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.405 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.405 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:44.405 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:44.405 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:44.663 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:44.663 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:44.663 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:44.663 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:44.663 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:44.663 [44/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.663 [45/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:44.663 [46/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:44.663 [47/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:44.663 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:44.663 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:44.663 [50/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:44.663 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:44.663 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:44.663 [53/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.663 [54/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:44.663 [55/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:44.663 [56/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:44.663 [57/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:44.663 [58/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.663 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:44.663 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:44.663 [61/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:44.663 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:44.663 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:44.663 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:44.663 [65/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:44.663 [66/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:44.663 [67/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:44.663 [68/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:44.663 [69/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:44.663 [70/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:44.663 [71/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:44.663 [72/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:44.663 [73/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:44.663 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:44.663 [75/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:44.663 [76/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:44.663 [77/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:44.663 [78/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:44.663 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:44.663 [80/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:44.663 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:44.663 [82/267] Linking static target lib/librte_telemetry.a 00:02:44.663 [83/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.663 [84/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:44.663 [85/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:44.663 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:44.663 [87/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:44.663 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:44.663 [89/267] Linking static target lib/librte_ring.a 00:02:44.663 [90/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:44.663 [91/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.663 [92/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:44.663 [93/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:44.663 [94/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:44.663 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:44.663 [96/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:44.663 [97/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:44.663 [98/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:44.663 [99/267] Linking static target lib/librte_meter.a 00:02:44.663 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:44.663 [101/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:44.663 [102/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:44.663 [103/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:44.663 [104/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:44.664 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:44.664 [106/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.924 [107/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.924 [108/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.924 [109/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:44.924 [110/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.924 [111/267] Compiling C object 
lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:44.924 [112/267] Linking static target lib/librte_rcu.a 00:02:44.924 [113/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:44.924 [114/267] Linking static target lib/librte_timer.a 00:02:44.924 [115/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:44.924 [116/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.924 [117/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:44.924 [118/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:44.924 [119/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:44.924 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:44.924 [121/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.924 [122/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.924 [123/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:44.924 [124/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:44.924 [125/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:44.924 [126/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:44.924 [127/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:44.924 [128/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:44.924 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:44.924 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:44.924 [131/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:44.924 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:44.924 [133/267] Linking static target lib/librte_dmadev.a 00:02:44.924 [134/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.924 [135/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:44.924 [136/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:44.924 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:44.924 [138/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:44.924 [139/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:44.924 [140/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:44.924 [141/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.924 [142/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:44.924 [143/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:44.924 [144/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:44.924 [145/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:44.924 [146/267] Linking static target lib/librte_mempool.a 00:02:44.924 [147/267] Linking static target lib/librte_compressdev.a 00:02:44.924 [148/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:44.924 [149/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.924 [150/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:44.924 [151/267] Generating lib/log.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:44.924 [152/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:44.924 [153/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:44.924 [154/267] Linking static target lib/librte_cmdline.a 00:02:44.924 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:44.924 [156/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.924 [157/267] Linking static target lib/librte_power.a 00:02:44.924 [158/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:44.924 [159/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.924 [160/267] Linking target lib/librte_log.so.24.1 00:02:44.924 [161/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:44.924 [162/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:44.924 [163/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.924 [164/267] Linking static target lib/librte_net.a 00:02:44.924 [165/267] Linking static target lib/librte_mbuf.a 00:02:44.924 [166/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:44.924 [167/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:44.924 [168/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.924 [169/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:44.924 [170/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:44.924 [171/267] Linking static target lib/librte_security.a 00:02:44.924 [172/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.924 [173/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.924 [174/267] Linking static target lib/librte_eal.a 00:02:44.924 [175/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:44.924 [176/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:44.924 [177/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:44.924 [178/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:44.924 [179/267] Linking static target lib/librte_reorder.a 00:02:44.925 [180/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:44.925 [181/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:44.925 [182/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:44.925 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:44.925 [184/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.187 [185/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.187 [186/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.187 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.187 [188/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.187 [189/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:45.187 [190/267] Linking static target drivers/librte_bus_vdev.a 00:02:45.187 [191/267] Linking target lib/librte_kvargs.so.24.1 00:02:45.187 [192/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:45.187 [193/267] Generating lib/rcu.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:45.187 [194/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:45.187 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:45.187 [196/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.187 [197/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:45.187 [198/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:45.187 [199/267] Linking static target drivers/librte_mempool_ring.a 00:02:45.187 [200/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.187 [201/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:45.187 [202/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:45.187 [203/267] Linking static target drivers/librte_bus_pci.a 00:02:45.187 [204/267] Linking static target lib/librte_hash.a 00:02:45.187 [205/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.187 [206/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.187 [207/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.448 [208/267] Linking target lib/librte_telemetry.so.24.1 00:02:45.449 [209/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:45.449 [210/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:45.449 [211/267] Linking static target lib/librte_cryptodev.a 00:02:45.449 [212/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.449 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:45.449 [214/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.449 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.710 [216/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.710 [217/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.710 [218/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.710 [219/267] Linking static target lib/librte_ethdev.a 00:02:45.710 [220/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:45.710 [221/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.971 [222/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.971 [223/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.971 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.235 [225/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.235 [226/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.807 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:46.807 [228/267] Linking static target lib/librte_vhost.a 00:02:47.749 [229/267] Generating lib/cryptodev.sym_chk with a custom command 
(wrapped by meson to capture output) 00:02:48.692 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.283 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.670 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.670 [233/267] Linking target lib/librte_eal.so.24.1 00:02:56.930 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:56.930 [235/267] Linking target lib/librte_ring.so.24.1 00:02:56.930 [236/267] Linking target lib/librte_meter.so.24.1 00:02:56.930 [237/267] Linking target lib/librte_pci.so.24.1 00:02:56.930 [238/267] Linking target lib/librte_dmadev.so.24.1 00:02:56.930 [239/267] Linking target lib/librte_timer.so.24.1 00:02:56.930 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:56.930 [241/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:56.930 [242/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:57.191 [243/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:57.191 [244/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:57.191 [245/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:57.191 [246/267] Linking target lib/librte_rcu.so.24.1 00:02:57.191 [247/267] Linking target lib/librte_mempool.so.24.1 00:02:57.191 [248/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:57.191 [249/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:57.191 [250/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:57.191 [251/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:57.191 [252/267] Linking target lib/librte_mbuf.so.24.1 00:02:57.452 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:57.452 [254/267] Linking target lib/librte_net.so.24.1 00:02:57.452 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:57.452 [256/267] Linking target lib/librte_compressdev.so.24.1 00:02:57.452 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:57.715 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:57.715 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:57.715 [260/267] Linking target lib/librte_hash.so.24.1 00:02:57.715 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:57.715 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:57.715 [263/267] Linking target lib/librte_security.so.24.1 00:02:57.715 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:57.976 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:57.976 [266/267] Linking target lib/librte_power.so.24.1 00:02:57.976 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:57.976 INFO: autodetecting backend as ninja 00:02:57.976 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:58.920 CC lib/log/log.o 00:02:58.920 CC lib/log/log_flags.o 00:02:58.920 CC lib/log/log_deprecated.o 00:02:58.920 CC lib/ut_mock/mock.o 00:02:58.920 CC lib/ut/ut.o 00:02:59.182 LIB libspdk_ut_mock.a 00:02:59.182 LIB libspdk_log.a 
00:02:59.182 LIB libspdk_ut.a 00:02:59.182 SO libspdk_ut_mock.so.6.0 00:02:59.182 SO libspdk_log.so.7.0 00:02:59.182 SO libspdk_ut.so.2.0 00:02:59.182 SYMLINK libspdk_ut_mock.so 00:02:59.182 SYMLINK libspdk_ut.so 00:02:59.182 SYMLINK libspdk_log.so 00:02:59.755 CC lib/ioat/ioat.o 00:02:59.755 CXX lib/trace_parser/trace.o 00:02:59.755 CC lib/dma/dma.o 00:02:59.755 CC lib/util/base64.o 00:02:59.755 CC lib/util/bit_array.o 00:02:59.755 CC lib/util/cpuset.o 00:02:59.755 CC lib/util/crc16.o 00:02:59.755 CC lib/util/crc32.o 00:02:59.755 CC lib/util/crc32c.o 00:02:59.755 CC lib/util/dif.o 00:02:59.755 CC lib/util/crc32_ieee.o 00:02:59.755 CC lib/util/crc64.o 00:02:59.755 CC lib/util/fd.o 00:02:59.755 CC lib/util/fd_group.o 00:02:59.755 CC lib/util/file.o 00:02:59.755 CC lib/util/hexlify.o 00:02:59.755 CC lib/util/iov.o 00:02:59.755 CC lib/util/math.o 00:02:59.755 CC lib/util/net.o 00:02:59.755 CC lib/util/pipe.o 00:02:59.755 CC lib/util/strerror_tls.o 00:02:59.755 CC lib/util/string.o 00:02:59.755 CC lib/util/uuid.o 00:02:59.755 CC lib/util/xor.o 00:02:59.755 CC lib/util/zipf.o 00:02:59.755 CC lib/vfio_user/host/vfio_user_pci.o 00:02:59.755 CC lib/vfio_user/host/vfio_user.o 00:03:00.017 LIB libspdk_dma.a 00:03:00.017 SO libspdk_dma.so.4.0 00:03:00.017 LIB libspdk_ioat.a 00:03:00.017 SYMLINK libspdk_dma.so 00:03:00.017 SO libspdk_ioat.so.7.0 00:03:00.017 SYMLINK libspdk_ioat.so 00:03:00.017 LIB libspdk_vfio_user.a 00:03:00.278 SO libspdk_vfio_user.so.5.0 00:03:00.278 LIB libspdk_util.a 00:03:00.278 SYMLINK libspdk_vfio_user.so 00:03:00.278 SO libspdk_util.so.9.1 00:03:00.541 SYMLINK libspdk_util.so 00:03:00.541 LIB libspdk_trace_parser.a 00:03:00.541 SO libspdk_trace_parser.so.5.0 00:03:00.803 SYMLINK libspdk_trace_parser.so 00:03:00.803 CC lib/json/json_parse.o 00:03:00.803 CC lib/json/json_util.o 00:03:00.803 CC lib/rdma_provider/common.o 00:03:00.803 CC lib/json/json_write.o 00:03:00.803 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:00.803 CC lib/conf/conf.o 00:03:00.803 CC lib/rdma_utils/rdma_utils.o 00:03:00.803 CC lib/idxd/idxd.o 00:03:00.803 CC lib/idxd/idxd_user.o 00:03:00.803 CC lib/vmd/vmd.o 00:03:00.803 CC lib/idxd/idxd_kernel.o 00:03:00.803 CC lib/vmd/led.o 00:03:00.803 CC lib/env_dpdk/env.o 00:03:00.803 CC lib/env_dpdk/memory.o 00:03:00.803 CC lib/env_dpdk/pci.o 00:03:00.803 CC lib/env_dpdk/init.o 00:03:00.803 CC lib/env_dpdk/threads.o 00:03:00.803 CC lib/env_dpdk/pci_ioat.o 00:03:00.803 CC lib/env_dpdk/pci_virtio.o 00:03:00.803 CC lib/env_dpdk/pci_vmd.o 00:03:00.803 CC lib/env_dpdk/pci_idxd.o 00:03:00.803 CC lib/env_dpdk/pci_event.o 00:03:00.803 CC lib/env_dpdk/sigbus_handler.o 00:03:00.803 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.803 CC lib/env_dpdk/pci_dpdk.o 00:03:00.803 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:01.064 LIB libspdk_rdma_provider.a 00:03:01.064 SO libspdk_rdma_provider.so.6.0 00:03:01.064 LIB libspdk_conf.a 00:03:01.064 LIB libspdk_rdma_utils.a 00:03:01.064 LIB libspdk_json.a 00:03:01.064 SO libspdk_conf.so.6.0 00:03:01.064 SO libspdk_rdma_utils.so.1.0 00:03:01.064 SYMLINK libspdk_rdma_provider.so 00:03:01.064 SO libspdk_json.so.6.0 00:03:01.064 SYMLINK libspdk_conf.so 00:03:01.325 SYMLINK libspdk_rdma_utils.so 00:03:01.325 SYMLINK libspdk_json.so 00:03:01.325 LIB libspdk_idxd.a 00:03:01.325 SO libspdk_idxd.so.12.0 00:03:01.325 LIB libspdk_vmd.a 00:03:01.325 SO libspdk_vmd.so.6.0 00:03:01.586 SYMLINK libspdk_idxd.so 00:03:01.586 SYMLINK libspdk_vmd.so 00:03:01.586 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.586 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.586 CC 
lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.586 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.847 LIB libspdk_jsonrpc.a 00:03:01.847 SO libspdk_jsonrpc.so.6.0 00:03:01.847 SYMLINK libspdk_jsonrpc.so 00:03:02.107 LIB libspdk_env_dpdk.a 00:03:02.107 SO libspdk_env_dpdk.so.15.0 00:03:02.107 SYMLINK libspdk_env_dpdk.so 00:03:02.368 CC lib/rpc/rpc.o 00:03:02.628 LIB libspdk_rpc.a 00:03:02.628 SO libspdk_rpc.so.6.0 00:03:02.628 SYMLINK libspdk_rpc.so 00:03:02.890 CC lib/trace/trace.o 00:03:02.890 CC lib/trace/trace_flags.o 00:03:02.890 CC lib/trace/trace_rpc.o 00:03:02.890 CC lib/keyring/keyring.o 00:03:02.890 CC lib/notify/notify.o 00:03:02.890 CC lib/keyring/keyring_rpc.o 00:03:02.890 CC lib/notify/notify_rpc.o 00:03:03.151 LIB libspdk_notify.a 00:03:03.151 SO libspdk_notify.so.6.0 00:03:03.151 LIB libspdk_trace.a 00:03:03.151 LIB libspdk_keyring.a 00:03:03.151 SO libspdk_trace.so.10.0 00:03:03.151 SO libspdk_keyring.so.1.0 00:03:03.151 SYMLINK libspdk_notify.so 00:03:03.412 SYMLINK libspdk_trace.so 00:03:03.412 SYMLINK libspdk_keyring.so 00:03:03.673 CC lib/sock/sock.o 00:03:03.673 CC lib/thread/thread.o 00:03:03.673 CC lib/sock/sock_rpc.o 00:03:03.673 CC lib/thread/iobuf.o 00:03:03.934 LIB libspdk_sock.a 00:03:03.934 SO libspdk_sock.so.10.0 00:03:04.196 SYMLINK libspdk_sock.so 00:03:04.458 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:04.458 CC lib/nvme/nvme_ctrlr.o 00:03:04.458 CC lib/nvme/nvme_ns_cmd.o 00:03:04.458 CC lib/nvme/nvme_fabric.o 00:03:04.458 CC lib/nvme/nvme_ns.o 00:03:04.458 CC lib/nvme/nvme_qpair.o 00:03:04.458 CC lib/nvme/nvme_pcie_common.o 00:03:04.458 CC lib/nvme/nvme_pcie.o 00:03:04.458 CC lib/nvme/nvme.o 00:03:04.458 CC lib/nvme/nvme_quirks.o 00:03:04.458 CC lib/nvme/nvme_transport.o 00:03:04.458 CC lib/nvme/nvme_discovery.o 00:03:04.458 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:04.458 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:04.458 CC lib/nvme/nvme_tcp.o 00:03:04.458 CC lib/nvme/nvme_opal.o 00:03:04.458 CC lib/nvme/nvme_io_msg.o 00:03:04.458 CC lib/nvme/nvme_poll_group.o 00:03:04.458 CC lib/nvme/nvme_zns.o 00:03:04.458 CC lib/nvme/nvme_stubs.o 00:03:04.458 CC lib/nvme/nvme_auth.o 00:03:04.458 CC lib/nvme/nvme_cuse.o 00:03:04.458 CC lib/nvme/nvme_vfio_user.o 00:03:04.458 CC lib/nvme/nvme_rdma.o 00:03:05.031 LIB libspdk_thread.a 00:03:05.031 SO libspdk_thread.so.10.1 00:03:05.031 SYMLINK libspdk_thread.so 00:03:05.292 CC lib/blob/blobstore.o 00:03:05.292 CC lib/blob/request.o 00:03:05.292 CC lib/blob/zeroes.o 00:03:05.292 CC lib/blob/blob_bs_dev.o 00:03:05.292 CC lib/accel/accel.o 00:03:05.292 CC lib/init/json_config.o 00:03:05.292 CC lib/accel/accel_rpc.o 00:03:05.292 CC lib/init/subsystem.o 00:03:05.292 CC lib/accel/accel_sw.o 00:03:05.292 CC lib/init/subsystem_rpc.o 00:03:05.292 CC lib/init/rpc.o 00:03:05.292 CC lib/vfu_tgt/tgt_endpoint.o 00:03:05.292 CC lib/virtio/virtio.o 00:03:05.292 CC lib/vfu_tgt/tgt_rpc.o 00:03:05.292 CC lib/virtio/virtio_vhost_user.o 00:03:05.292 CC lib/virtio/virtio_vfio_user.o 00:03:05.292 CC lib/virtio/virtio_pci.o 00:03:05.554 LIB libspdk_init.a 00:03:05.815 SO libspdk_init.so.5.0 00:03:05.815 LIB libspdk_virtio.a 00:03:05.815 LIB libspdk_vfu_tgt.a 00:03:05.815 SO libspdk_vfu_tgt.so.3.0 00:03:05.815 SO libspdk_virtio.so.7.0 00:03:05.815 SYMLINK libspdk_init.so 00:03:05.815 SYMLINK libspdk_vfu_tgt.so 00:03:05.815 SYMLINK libspdk_virtio.so 00:03:06.077 CC lib/event/app.o 00:03:06.077 CC lib/event/reactor.o 00:03:06.077 CC lib/event/log_rpc.o 00:03:06.077 CC lib/event/app_rpc.o 00:03:06.077 CC lib/event/scheduler_static.o 00:03:06.339 LIB libspdk_accel.a 
00:03:06.339 LIB libspdk_nvme.a 00:03:06.339 SO libspdk_accel.so.15.1 00:03:06.339 SYMLINK libspdk_accel.so 00:03:06.339 SO libspdk_nvme.so.13.1 00:03:06.602 LIB libspdk_event.a 00:03:06.602 SO libspdk_event.so.14.0 00:03:06.602 SYMLINK libspdk_event.so 00:03:06.602 CC lib/bdev/bdev.o 00:03:06.602 CC lib/bdev/bdev_rpc.o 00:03:06.602 CC lib/bdev/bdev_zone.o 00:03:06.602 CC lib/bdev/part.o 00:03:06.602 CC lib/bdev/scsi_nvme.o 00:03:06.876 SYMLINK libspdk_nvme.so 00:03:07.817 LIB libspdk_blob.a 00:03:08.078 SO libspdk_blob.so.11.0 00:03:08.078 SYMLINK libspdk_blob.so 00:03:08.339 CC lib/blobfs/blobfs.o 00:03:08.339 CC lib/lvol/lvol.o 00:03:08.339 CC lib/blobfs/tree.o 00:03:08.951 LIB libspdk_bdev.a 00:03:08.951 SO libspdk_bdev.so.15.1 00:03:09.223 LIB libspdk_blobfs.a 00:03:09.223 SYMLINK libspdk_bdev.so 00:03:09.223 SO libspdk_blobfs.so.10.0 00:03:09.223 LIB libspdk_lvol.a 00:03:09.223 SYMLINK libspdk_blobfs.so 00:03:09.223 SO libspdk_lvol.so.10.0 00:03:09.223 SYMLINK libspdk_lvol.so 00:03:09.485 CC lib/ublk/ublk.o 00:03:09.485 CC lib/ublk/ublk_rpc.o 00:03:09.485 CC lib/scsi/dev.o 00:03:09.485 CC lib/scsi/lun.o 00:03:09.485 CC lib/scsi/port.o 00:03:09.485 CC lib/scsi/scsi.o 00:03:09.485 CC lib/scsi/scsi_bdev.o 00:03:09.485 CC lib/scsi/scsi_pr.o 00:03:09.485 CC lib/nbd/nbd.o 00:03:09.485 CC lib/scsi/scsi_rpc.o 00:03:09.485 CC lib/nbd/nbd_rpc.o 00:03:09.485 CC lib/nvmf/ctrlr.o 00:03:09.485 CC lib/scsi/task.o 00:03:09.485 CC lib/nvmf/ctrlr_discovery.o 00:03:09.485 CC lib/nvmf/ctrlr_bdev.o 00:03:09.485 CC lib/nvmf/subsystem.o 00:03:09.485 CC lib/nvmf/nvmf.o 00:03:09.485 CC lib/nvmf/nvmf_rpc.o 00:03:09.485 CC lib/nvmf/transport.o 00:03:09.485 CC lib/nvmf/tcp.o 00:03:09.485 CC lib/ftl/ftl_core.o 00:03:09.485 CC lib/ftl/ftl_init.o 00:03:09.485 CC lib/nvmf/stubs.o 00:03:09.485 CC lib/nvmf/mdns_server.o 00:03:09.485 CC lib/ftl/ftl_layout.o 00:03:09.485 CC lib/nvmf/vfio_user.o 00:03:09.485 CC lib/ftl/ftl_debug.o 00:03:09.485 CC lib/nvmf/rdma.o 00:03:09.485 CC lib/ftl/ftl_io.o 00:03:09.485 CC lib/nvmf/auth.o 00:03:09.485 CC lib/ftl/ftl_sb.o 00:03:09.485 CC lib/ftl/ftl_l2p.o 00:03:09.485 CC lib/ftl/ftl_l2p_flat.o 00:03:09.485 CC lib/ftl/ftl_nv_cache.o 00:03:09.485 CC lib/ftl/ftl_band.o 00:03:09.485 CC lib/ftl/ftl_band_ops.o 00:03:09.485 CC lib/ftl/ftl_rq.o 00:03:09.485 CC lib/ftl/ftl_writer.o 00:03:09.485 CC lib/ftl/ftl_reloc.o 00:03:09.485 CC lib/ftl/ftl_l2p_cache.o 00:03:09.485 CC lib/ftl/ftl_p2l.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:09.485 CC lib/ftl/utils/ftl_md.o 00:03:09.485 CC lib/ftl/utils/ftl_conf.o 00:03:09.485 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:09.485 CC lib/ftl/utils/ftl_mempool.o 00:03:09.485 CC lib/ftl/utils/ftl_property.o 00:03:09.485 CC lib/ftl/utils/ftl_bitmap.o 00:03:09.485 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:09.485 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:09.485 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:09.485 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:09.485 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.485 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:03:09.485 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.485 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.485 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.485 CC lib/ftl/base/ftl_base_dev.o 00:03:09.485 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.485 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.485 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.485 CC lib/ftl/ftl_trace.o 00:03:10.050 LIB libspdk_nbd.a 00:03:10.051 LIB libspdk_scsi.a 00:03:10.051 SO libspdk_nbd.so.7.0 00:03:10.051 SO libspdk_scsi.so.9.0 00:03:10.051 SYMLINK libspdk_nbd.so 00:03:10.310 SYMLINK libspdk_scsi.so 00:03:10.310 LIB libspdk_ublk.a 00:03:10.310 SO libspdk_ublk.so.3.0 00:03:10.310 SYMLINK libspdk_ublk.so 00:03:10.568 CC lib/vhost/vhost.o 00:03:10.568 CC lib/vhost/vhost_rpc.o 00:03:10.568 CC lib/vhost/vhost_scsi.o 00:03:10.568 CC lib/vhost/vhost_blk.o 00:03:10.568 CC lib/vhost/rte_vhost_user.o 00:03:10.568 CC lib/iscsi/conn.o 00:03:10.568 CC lib/iscsi/init_grp.o 00:03:10.568 LIB libspdk_ftl.a 00:03:10.568 CC lib/iscsi/iscsi.o 00:03:10.568 CC lib/iscsi/md5.o 00:03:10.568 CC lib/iscsi/portal_grp.o 00:03:10.568 CC lib/iscsi/param.o 00:03:10.568 CC lib/iscsi/tgt_node.o 00:03:10.568 CC lib/iscsi/iscsi_subsystem.o 00:03:10.568 CC lib/iscsi/iscsi_rpc.o 00:03:10.568 CC lib/iscsi/task.o 00:03:10.568 SO libspdk_ftl.so.9.0 00:03:11.138 SYMLINK libspdk_ftl.so 00:03:11.138 LIB libspdk_nvmf.a 00:03:11.398 SO libspdk_nvmf.so.18.1 00:03:11.398 LIB libspdk_vhost.a 00:03:11.398 SO libspdk_vhost.so.8.0 00:03:11.658 SYMLINK libspdk_nvmf.so 00:03:11.658 SYMLINK libspdk_vhost.so 00:03:11.658 LIB libspdk_iscsi.a 00:03:11.658 SO libspdk_iscsi.so.8.0 00:03:11.918 SYMLINK libspdk_iscsi.so 00:03:12.490 CC module/env_dpdk/env_dpdk_rpc.o 00:03:12.490 CC module/vfu_device/vfu_virtio.o 00:03:12.490 CC module/vfu_device/vfu_virtio_blk.o 00:03:12.490 CC module/vfu_device/vfu_virtio_scsi.o 00:03:12.490 CC module/vfu_device/vfu_virtio_rpc.o 00:03:12.490 LIB libspdk_env_dpdk_rpc.a 00:03:12.490 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:12.490 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:12.490 CC module/accel/ioat/accel_ioat.o 00:03:12.490 CC module/accel/ioat/accel_ioat_rpc.o 00:03:12.490 CC module/accel/iaa/accel_iaa.o 00:03:12.490 CC module/accel/iaa/accel_iaa_rpc.o 00:03:12.490 CC module/accel/dsa/accel_dsa.o 00:03:12.490 CC module/sock/posix/posix.o 00:03:12.490 CC module/accel/dsa/accel_dsa_rpc.o 00:03:12.490 CC module/blob/bdev/blob_bdev.o 00:03:12.490 CC module/accel/error/accel_error.o 00:03:12.490 SO libspdk_env_dpdk_rpc.so.6.0 00:03:12.490 CC module/accel/error/accel_error_rpc.o 00:03:12.490 CC module/keyring/linux/keyring.o 00:03:12.490 CC module/keyring/file/keyring.o 00:03:12.490 CC module/keyring/linux/keyring_rpc.o 00:03:12.490 CC module/scheduler/gscheduler/gscheduler.o 00:03:12.490 CC module/keyring/file/keyring_rpc.o 00:03:12.751 SYMLINK libspdk_env_dpdk_rpc.so 00:03:12.751 LIB libspdk_scheduler_dpdk_governor.a 00:03:12.751 LIB libspdk_keyring_file.a 00:03:12.751 LIB libspdk_keyring_linux.a 00:03:12.751 LIB libspdk_scheduler_gscheduler.a 00:03:12.751 LIB libspdk_scheduler_dynamic.a 00:03:12.751 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:12.751 SO libspdk_scheduler_gscheduler.so.4.0 00:03:12.751 LIB libspdk_accel_error.a 00:03:12.751 LIB libspdk_accel_ioat.a 00:03:12.751 SO libspdk_keyring_file.so.1.0 00:03:12.751 SO libspdk_keyring_linux.so.1.0 00:03:12.751 SO libspdk_scheduler_dynamic.so.4.0 00:03:12.751 LIB libspdk_accel_iaa.a 00:03:12.751 SYMLINK libspdk_scheduler_gscheduler.so 00:03:12.751 LIB libspdk_accel_dsa.a 
00:03:12.751 SO libspdk_accel_ioat.so.6.0 00:03:12.751 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:12.751 SO libspdk_accel_error.so.2.0 00:03:12.751 SO libspdk_accel_iaa.so.3.0 00:03:12.751 LIB libspdk_blob_bdev.a 00:03:12.751 SYMLINK libspdk_keyring_file.so 00:03:12.751 SYMLINK libspdk_keyring_linux.so 00:03:12.751 SYMLINK libspdk_scheduler_dynamic.so 00:03:12.751 SO libspdk_accel_dsa.so.5.0 00:03:13.012 SO libspdk_blob_bdev.so.11.0 00:03:13.012 SYMLINK libspdk_accel_ioat.so 00:03:13.012 SYMLINK libspdk_accel_error.so 00:03:13.012 SYMLINK libspdk_accel_iaa.so 00:03:13.012 SYMLINK libspdk_accel_dsa.so 00:03:13.012 LIB libspdk_vfu_device.a 00:03:13.012 SYMLINK libspdk_blob_bdev.so 00:03:13.012 SO libspdk_vfu_device.so.3.0 00:03:13.012 SYMLINK libspdk_vfu_device.so 00:03:13.273 LIB libspdk_sock_posix.a 00:03:13.274 SO libspdk_sock_posix.so.6.0 00:03:13.274 SYMLINK libspdk_sock_posix.so 00:03:13.534 CC module/bdev/lvol/vbdev_lvol.o 00:03:13.534 CC module/bdev/gpt/gpt.o 00:03:13.534 CC module/bdev/gpt/vbdev_gpt.o 00:03:13.534 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:13.534 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:13.534 CC module/bdev/delay/vbdev_delay.o 00:03:13.534 CC module/bdev/error/vbdev_error.o 00:03:13.534 CC module/bdev/error/vbdev_error_rpc.o 00:03:13.534 CC module/bdev/null/bdev_null_rpc.o 00:03:13.534 CC module/bdev/null/bdev_null.o 00:03:13.534 CC module/bdev/passthru/vbdev_passthru.o 00:03:13.534 CC module/blobfs/bdev/blobfs_bdev.o 00:03:13.534 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:13.534 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:13.534 CC module/bdev/malloc/bdev_malloc.o 00:03:13.534 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:13.534 CC module/bdev/iscsi/bdev_iscsi.o 00:03:13.534 CC module/bdev/raid/bdev_raid.o 00:03:13.534 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:13.534 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:13.534 CC module/bdev/raid/bdev_raid_rpc.o 00:03:13.534 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:13.534 CC module/bdev/raid/bdev_raid_sb.o 00:03:13.534 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:13.534 CC module/bdev/nvme/bdev_nvme.o 00:03:13.534 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:13.534 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:13.534 CC module/bdev/raid/raid0.o 00:03:13.534 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:13.534 CC module/bdev/split/vbdev_split.o 00:03:13.534 CC module/bdev/raid/concat.o 00:03:13.534 CC module/bdev/nvme/nvme_rpc.o 00:03:13.534 CC module/bdev/split/vbdev_split_rpc.o 00:03:13.534 CC module/bdev/nvme/bdev_mdns_client.o 00:03:13.534 CC module/bdev/raid/raid1.o 00:03:13.534 CC module/bdev/nvme/vbdev_opal.o 00:03:13.534 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:13.534 CC module/bdev/aio/bdev_aio.o 00:03:13.534 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:13.534 CC module/bdev/aio/bdev_aio_rpc.o 00:03:13.534 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:13.534 CC module/bdev/ftl/bdev_ftl.o 00:03:13.794 LIB libspdk_blobfs_bdev.a 00:03:13.794 LIB libspdk_bdev_split.a 00:03:13.794 LIB libspdk_bdev_null.a 00:03:13.794 LIB libspdk_bdev_gpt.a 00:03:13.794 SO libspdk_blobfs_bdev.so.6.0 00:03:13.794 LIB libspdk_bdev_error.a 00:03:13.794 SO libspdk_bdev_split.so.6.0 00:03:13.794 LIB libspdk_bdev_passthru.a 00:03:13.794 SO libspdk_bdev_null.so.6.0 00:03:13.794 SO libspdk_bdev_gpt.so.6.0 00:03:13.794 SO libspdk_bdev_error.so.6.0 00:03:13.794 LIB libspdk_bdev_zone_block.a 00:03:13.794 SO libspdk_bdev_passthru.so.6.0 00:03:13.794 LIB libspdk_bdev_ftl.a 00:03:13.794 LIB libspdk_bdev_delay.a 
00:03:13.794 SYMLINK libspdk_blobfs_bdev.so 00:03:13.794 LIB libspdk_bdev_aio.a 00:03:13.794 LIB libspdk_bdev_malloc.a 00:03:13.794 SYMLINK libspdk_bdev_split.so 00:03:13.794 LIB libspdk_bdev_iscsi.a 00:03:14.055 SO libspdk_bdev_zone_block.so.6.0 00:03:14.055 SO libspdk_bdev_ftl.so.6.0 00:03:14.055 SYMLINK libspdk_bdev_null.so 00:03:14.055 SO libspdk_bdev_delay.so.6.0 00:03:14.055 SYMLINK libspdk_bdev_error.so 00:03:14.055 SYMLINK libspdk_bdev_gpt.so 00:03:14.055 SO libspdk_bdev_malloc.so.6.0 00:03:14.055 SO libspdk_bdev_aio.so.6.0 00:03:14.055 SO libspdk_bdev_iscsi.so.6.0 00:03:14.055 SYMLINK libspdk_bdev_passthru.so 00:03:14.055 LIB libspdk_bdev_lvol.a 00:03:14.055 SYMLINK libspdk_bdev_zone_block.so 00:03:14.055 SO libspdk_bdev_lvol.so.6.0 00:03:14.055 SYMLINK libspdk_bdev_ftl.so 00:03:14.055 SYMLINK libspdk_bdev_delay.so 00:03:14.055 SYMLINK libspdk_bdev_malloc.so 00:03:14.055 SYMLINK libspdk_bdev_aio.so 00:03:14.055 SYMLINK libspdk_bdev_iscsi.so 00:03:14.055 LIB libspdk_bdev_virtio.a 00:03:14.055 SYMLINK libspdk_bdev_lvol.so 00:03:14.055 SO libspdk_bdev_virtio.so.6.0 00:03:14.055 SYMLINK libspdk_bdev_virtio.so 00:03:14.316 LIB libspdk_bdev_raid.a 00:03:14.577 SO libspdk_bdev_raid.so.6.0 00:03:14.577 SYMLINK libspdk_bdev_raid.so 00:03:15.520 LIB libspdk_bdev_nvme.a 00:03:15.520 SO libspdk_bdev_nvme.so.7.0 00:03:15.520 SYMLINK libspdk_bdev_nvme.so 00:03:16.467 CC module/event/subsystems/vmd/vmd.o 00:03:16.467 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:16.467 CC module/event/subsystems/iobuf/iobuf.o 00:03:16.467 CC module/event/subsystems/sock/sock.o 00:03:16.467 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:16.467 CC module/event/subsystems/keyring/keyring.o 00:03:16.467 CC module/event/subsystems/scheduler/scheduler.o 00:03:16.467 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:16.467 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:16.467 LIB libspdk_event_sock.a 00:03:16.467 LIB libspdk_event_vhost_blk.a 00:03:16.467 LIB libspdk_event_vfu_tgt.a 00:03:16.467 LIB libspdk_event_vmd.a 00:03:16.467 LIB libspdk_event_keyring.a 00:03:16.467 SO libspdk_event_sock.so.5.0 00:03:16.467 LIB libspdk_event_scheduler.a 00:03:16.467 LIB libspdk_event_iobuf.a 00:03:16.467 SO libspdk_event_vhost_blk.so.3.0 00:03:16.467 SO libspdk_event_vfu_tgt.so.3.0 00:03:16.467 SO libspdk_event_keyring.so.1.0 00:03:16.467 SO libspdk_event_vmd.so.6.0 00:03:16.467 SO libspdk_event_scheduler.so.4.0 00:03:16.467 SYMLINK libspdk_event_sock.so 00:03:16.467 SO libspdk_event_iobuf.so.3.0 00:03:16.467 SYMLINK libspdk_event_vhost_blk.so 00:03:16.467 SYMLINK libspdk_event_vfu_tgt.so 00:03:16.728 SYMLINK libspdk_event_keyring.so 00:03:16.728 SYMLINK libspdk_event_vmd.so 00:03:16.728 SYMLINK libspdk_event_scheduler.so 00:03:16.728 SYMLINK libspdk_event_iobuf.so 00:03:16.988 CC module/event/subsystems/accel/accel.o 00:03:17.248 LIB libspdk_event_accel.a 00:03:17.248 SO libspdk_event_accel.so.6.0 00:03:17.248 SYMLINK libspdk_event_accel.so 00:03:17.509 CC module/event/subsystems/bdev/bdev.o 00:03:17.772 LIB libspdk_event_bdev.a 00:03:17.772 SO libspdk_event_bdev.so.6.0 00:03:17.772 SYMLINK libspdk_event_bdev.so 00:03:18.033 CC module/event/subsystems/nbd/nbd.o 00:03:18.033 CC module/event/subsystems/scsi/scsi.o 00:03:18.033 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:18.033 CC module/event/subsystems/ublk/ublk.o 00:03:18.033 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:18.295 LIB libspdk_event_nbd.a 00:03:18.295 LIB libspdk_event_ublk.a 00:03:18.295 LIB libspdk_event_scsi.a 00:03:18.295 SO 
libspdk_event_nbd.so.6.0 00:03:18.295 SO libspdk_event_ublk.so.3.0 00:03:18.295 SO libspdk_event_scsi.so.6.0 00:03:18.295 LIB libspdk_event_nvmf.a 00:03:18.295 SO libspdk_event_nvmf.so.6.0 00:03:18.295 SYMLINK libspdk_event_nbd.so 00:03:18.295 SYMLINK libspdk_event_ublk.so 00:03:18.295 SYMLINK libspdk_event_scsi.so 00:03:18.556 SYMLINK libspdk_event_nvmf.so 00:03:18.818 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:18.818 CC module/event/subsystems/iscsi/iscsi.o 00:03:18.818 LIB libspdk_event_vhost_scsi.a 00:03:19.079 LIB libspdk_event_iscsi.a 00:03:19.079 SO libspdk_event_vhost_scsi.so.3.0 00:03:19.079 SO libspdk_event_iscsi.so.6.0 00:03:19.079 SYMLINK libspdk_event_vhost_scsi.so 00:03:19.079 SYMLINK libspdk_event_iscsi.so 00:03:19.341 SO libspdk.so.6.0 00:03:19.341 SYMLINK libspdk.so 00:03:19.604 CC app/spdk_lspci/spdk_lspci.o 00:03:19.604 CXX app/trace/trace.o 00:03:19.604 CC app/spdk_nvme_discover/discovery_aer.o 00:03:19.604 CC app/trace_record/trace_record.o 00:03:19.604 CC app/spdk_nvme_perf/perf.o 00:03:19.604 CC app/spdk_top/spdk_top.o 00:03:19.604 TEST_HEADER include/spdk/accel.h 00:03:19.604 TEST_HEADER include/spdk/accel_module.h 00:03:19.604 CC app/spdk_nvme_identify/identify.o 00:03:19.604 CC test/rpc_client/rpc_client_test.o 00:03:19.604 TEST_HEADER include/spdk/barrier.h 00:03:19.604 TEST_HEADER include/spdk/assert.h 00:03:19.604 TEST_HEADER include/spdk/base64.h 00:03:19.604 TEST_HEADER include/spdk/bdev.h 00:03:19.604 TEST_HEADER include/spdk/bdev_module.h 00:03:19.604 TEST_HEADER include/spdk/bdev_zone.h 00:03:19.604 TEST_HEADER include/spdk/bit_array.h 00:03:19.604 TEST_HEADER include/spdk/bit_pool.h 00:03:19.604 TEST_HEADER include/spdk/blob_bdev.h 00:03:19.604 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:19.604 TEST_HEADER include/spdk/blobfs.h 00:03:19.604 TEST_HEADER include/spdk/blob.h 00:03:19.604 TEST_HEADER include/spdk/conf.h 00:03:19.604 TEST_HEADER include/spdk/config.h 00:03:19.604 TEST_HEADER include/spdk/crc16.h 00:03:19.604 TEST_HEADER include/spdk/cpuset.h 00:03:19.604 TEST_HEADER include/spdk/crc64.h 00:03:19.604 TEST_HEADER include/spdk/crc32.h 00:03:19.604 TEST_HEADER include/spdk/dif.h 00:03:19.604 TEST_HEADER include/spdk/dma.h 00:03:19.604 TEST_HEADER include/spdk/endian.h 00:03:19.604 TEST_HEADER include/spdk/env_dpdk.h 00:03:19.604 CC app/nvmf_tgt/nvmf_main.o 00:03:19.604 TEST_HEADER include/spdk/env.h 00:03:19.604 TEST_HEADER include/spdk/event.h 00:03:19.604 TEST_HEADER include/spdk/fd_group.h 00:03:19.604 TEST_HEADER include/spdk/fd.h 00:03:19.604 TEST_HEADER include/spdk/file.h 00:03:19.604 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:19.604 TEST_HEADER include/spdk/ftl.h 00:03:19.604 TEST_HEADER include/spdk/gpt_spec.h 00:03:19.604 TEST_HEADER include/spdk/hexlify.h 00:03:19.604 CC app/spdk_dd/spdk_dd.o 00:03:19.604 TEST_HEADER include/spdk/histogram_data.h 00:03:19.604 TEST_HEADER include/spdk/idxd.h 00:03:19.604 TEST_HEADER include/spdk/init.h 00:03:19.604 TEST_HEADER include/spdk/idxd_spec.h 00:03:19.604 CC app/iscsi_tgt/iscsi_tgt.o 00:03:19.604 TEST_HEADER include/spdk/ioat.h 00:03:19.863 TEST_HEADER include/spdk/ioat_spec.h 00:03:19.863 TEST_HEADER include/spdk/json.h 00:03:19.863 TEST_HEADER include/spdk/iscsi_spec.h 00:03:19.863 TEST_HEADER include/spdk/jsonrpc.h 00:03:19.863 TEST_HEADER include/spdk/keyring.h 00:03:19.863 TEST_HEADER include/spdk/keyring_module.h 00:03:19.863 TEST_HEADER include/spdk/likely.h 00:03:19.863 TEST_HEADER include/spdk/log.h 00:03:19.863 TEST_HEADER include/spdk/lvol.h 00:03:19.863 
TEST_HEADER include/spdk/mmio.h 00:03:19.863 TEST_HEADER include/spdk/memory.h 00:03:19.863 CC app/spdk_tgt/spdk_tgt.o 00:03:19.863 TEST_HEADER include/spdk/nbd.h 00:03:19.863 TEST_HEADER include/spdk/net.h 00:03:19.863 TEST_HEADER include/spdk/notify.h 00:03:19.863 TEST_HEADER include/spdk/nvme.h 00:03:19.863 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:19.863 TEST_HEADER include/spdk/nvme_intel.h 00:03:19.863 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:19.863 TEST_HEADER include/spdk/nvme_zns.h 00:03:19.863 TEST_HEADER include/spdk/nvme_spec.h 00:03:19.863 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:19.863 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:19.863 TEST_HEADER include/spdk/nvmf.h 00:03:19.863 TEST_HEADER include/spdk/nvmf_spec.h 00:03:19.863 TEST_HEADER include/spdk/nvmf_transport.h 00:03:19.863 TEST_HEADER include/spdk/opal.h 00:03:19.863 TEST_HEADER include/spdk/opal_spec.h 00:03:19.863 TEST_HEADER include/spdk/pci_ids.h 00:03:19.863 TEST_HEADER include/spdk/pipe.h 00:03:19.863 TEST_HEADER include/spdk/queue.h 00:03:19.863 TEST_HEADER include/spdk/reduce.h 00:03:19.863 TEST_HEADER include/spdk/rpc.h 00:03:19.863 TEST_HEADER include/spdk/scheduler.h 00:03:19.863 TEST_HEADER include/spdk/scsi.h 00:03:19.863 TEST_HEADER include/spdk/scsi_spec.h 00:03:19.863 TEST_HEADER include/spdk/sock.h 00:03:19.863 TEST_HEADER include/spdk/thread.h 00:03:19.863 TEST_HEADER include/spdk/string.h 00:03:19.863 TEST_HEADER include/spdk/stdinc.h 00:03:19.863 TEST_HEADER include/spdk/trace_parser.h 00:03:19.863 TEST_HEADER include/spdk/trace.h 00:03:19.863 TEST_HEADER include/spdk/ublk.h 00:03:19.863 TEST_HEADER include/spdk/tree.h 00:03:19.863 TEST_HEADER include/spdk/uuid.h 00:03:19.863 TEST_HEADER include/spdk/util.h 00:03:19.863 TEST_HEADER include/spdk/version.h 00:03:19.863 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:19.863 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:19.863 TEST_HEADER include/spdk/vhost.h 00:03:19.863 TEST_HEADER include/spdk/vmd.h 00:03:19.863 TEST_HEADER include/spdk/xor.h 00:03:19.863 TEST_HEADER include/spdk/zipf.h 00:03:19.863 CXX test/cpp_headers/accel.o 00:03:19.863 CXX test/cpp_headers/accel_module.o 00:03:19.863 CXX test/cpp_headers/assert.o 00:03:19.863 CXX test/cpp_headers/base64.o 00:03:19.863 CXX test/cpp_headers/barrier.o 00:03:19.863 CXX test/cpp_headers/bdev.o 00:03:19.863 CXX test/cpp_headers/bdev_module.o 00:03:19.863 CXX test/cpp_headers/bit_pool.o 00:03:19.863 CXX test/cpp_headers/bdev_zone.o 00:03:19.863 CXX test/cpp_headers/bit_array.o 00:03:19.863 CXX test/cpp_headers/blob_bdev.o 00:03:19.863 CXX test/cpp_headers/blobfs_bdev.o 00:03:19.863 CXX test/cpp_headers/conf.o 00:03:19.863 CXX test/cpp_headers/blobfs.o 00:03:19.863 CXX test/cpp_headers/blob.o 00:03:19.863 CXX test/cpp_headers/cpuset.o 00:03:19.863 CXX test/cpp_headers/config.o 00:03:19.863 CXX test/cpp_headers/crc32.o 00:03:19.863 CXX test/cpp_headers/crc16.o 00:03:19.863 CXX test/cpp_headers/crc64.o 00:03:19.863 CXX test/cpp_headers/dif.o 00:03:19.863 CXX test/cpp_headers/dma.o 00:03:19.863 CXX test/cpp_headers/endian.o 00:03:19.863 CXX test/cpp_headers/event.o 00:03:19.864 CXX test/cpp_headers/env.o 00:03:19.864 CXX test/cpp_headers/env_dpdk.o 00:03:19.864 CXX test/cpp_headers/fd_group.o 00:03:19.864 CXX test/cpp_headers/fd.o 00:03:19.864 CXX test/cpp_headers/file.o 00:03:19.864 CXX test/cpp_headers/ftl.o 00:03:19.864 CXX test/cpp_headers/gpt_spec.o 00:03:19.864 CXX test/cpp_headers/histogram_data.o 00:03:19.864 CXX test/cpp_headers/idxd_spec.o 00:03:19.864 CXX test/cpp_headers/idxd.o 
00:03:19.864 CXX test/cpp_headers/hexlify.o 00:03:19.864 CXX test/cpp_headers/ioat_spec.o 00:03:19.864 CXX test/cpp_headers/ioat.o 00:03:19.864 CXX test/cpp_headers/iscsi_spec.o 00:03:19.864 CXX test/cpp_headers/init.o 00:03:19.864 CXX test/cpp_headers/json.o 00:03:19.864 CXX test/cpp_headers/keyring.o 00:03:19.864 CXX test/cpp_headers/keyring_module.o 00:03:19.864 CXX test/cpp_headers/jsonrpc.o 00:03:19.864 CXX test/cpp_headers/likely.o 00:03:19.864 CXX test/cpp_headers/log.o 00:03:19.864 CXX test/cpp_headers/memory.o 00:03:19.864 CXX test/cpp_headers/mmio.o 00:03:19.864 CXX test/cpp_headers/lvol.o 00:03:19.864 CXX test/cpp_headers/notify.o 00:03:19.864 CXX test/cpp_headers/nbd.o 00:03:19.864 CXX test/cpp_headers/nvme.o 00:03:19.864 CXX test/cpp_headers/nvme_ocssd.o 00:03:19.864 CXX test/cpp_headers/nvme_intel.o 00:03:19.864 CXX test/cpp_headers/net.o 00:03:19.864 CXX test/cpp_headers/nvme_zns.o 00:03:19.864 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:19.864 CXX test/cpp_headers/nvmf_cmd.o 00:03:19.864 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:19.864 CXX test/cpp_headers/nvme_spec.o 00:03:19.864 CXX test/cpp_headers/nvmf.o 00:03:19.864 CXX test/cpp_headers/nvmf_transport.o 00:03:19.864 CXX test/cpp_headers/nvmf_spec.o 00:03:19.864 CXX test/cpp_headers/opal.o 00:03:19.864 CXX test/cpp_headers/pipe.o 00:03:19.864 CXX test/cpp_headers/queue.o 00:03:19.864 CXX test/cpp_headers/pci_ids.o 00:03:19.864 CXX test/cpp_headers/opal_spec.o 00:03:19.864 CXX test/cpp_headers/rpc.o 00:03:19.864 CXX test/cpp_headers/reduce.o 00:03:19.864 CC test/app/jsoncat/jsoncat.o 00:03:19.864 LINK spdk_lspci 00:03:19.864 CXX test/cpp_headers/scsi.o 00:03:19.864 CXX test/cpp_headers/scheduler.o 00:03:19.864 CXX test/cpp_headers/scsi_spec.o 00:03:19.864 CXX test/cpp_headers/stdinc.o 00:03:19.864 CXX test/cpp_headers/sock.o 00:03:19.864 CXX test/cpp_headers/string.o 00:03:19.864 CXX test/cpp_headers/trace.o 00:03:19.864 CXX test/cpp_headers/thread.o 00:03:19.864 CXX test/cpp_headers/tree.o 00:03:19.864 CXX test/cpp_headers/trace_parser.o 00:03:19.864 CC test/app/stub/stub.o 00:03:19.864 CXX test/cpp_headers/ublk.o 00:03:19.864 CC test/thread/poller_perf/poller_perf.o 00:03:19.864 CXX test/cpp_headers/util.o 00:03:19.864 CXX test/cpp_headers/version.o 00:03:19.864 CXX test/cpp_headers/uuid.o 00:03:19.864 CXX test/cpp_headers/vfio_user_pci.o 00:03:19.864 CXX test/cpp_headers/vfio_user_spec.o 00:03:19.864 CXX test/cpp_headers/vhost.o 00:03:19.864 CXX test/cpp_headers/vmd.o 00:03:19.864 CXX test/cpp_headers/xor.o 00:03:19.864 CXX test/cpp_headers/zipf.o 00:03:19.864 CC examples/util/zipf/zipf.o 00:03:19.864 CC test/env/vtophys/vtophys.o 00:03:19.864 CC test/env/pci/pci_ut.o 00:03:19.864 CC examples/ioat/verify/verify.o 00:03:19.864 CC test/app/histogram_perf/histogram_perf.o 00:03:20.148 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:20.148 CC app/fio/nvme/fio_plugin.o 00:03:20.148 CC examples/ioat/perf/perf.o 00:03:20.148 CC test/env/memory/memory_ut.o 00:03:20.148 CC test/app/bdev_svc/bdev_svc.o 00:03:20.148 LINK spdk_nvme_discover 00:03:20.148 CC test/dma/test_dma/test_dma.o 00:03:20.148 CC app/fio/bdev/fio_plugin.o 00:03:20.148 LINK rpc_client_test 00:03:20.148 LINK interrupt_tgt 00:03:20.148 LINK nvmf_tgt 00:03:20.148 LINK spdk_trace_record 00:03:20.148 LINK iscsi_tgt 00:03:20.148 LINK spdk_tgt 00:03:20.407 CC test/env/mem_callbacks/mem_callbacks.o 00:03:20.407 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:20.407 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:20.407 CC 
test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:20.407 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:20.407 LINK histogram_perf 00:03:20.407 LINK zipf 00:03:20.407 LINK env_dpdk_post_init 00:03:20.666 LINK jsoncat 00:03:20.666 LINK stub 00:03:20.666 LINK spdk_trace 00:03:20.666 LINK spdk_dd 00:03:20.666 LINK vtophys 00:03:20.666 LINK poller_perf 00:03:20.666 LINK verify 00:03:20.666 LINK bdev_svc 00:03:20.666 LINK ioat_perf 00:03:20.666 LINK pci_ut 00:03:20.923 LINK spdk_top 00:03:20.923 CC examples/idxd/perf/perf.o 00:03:20.923 CC examples/vmd/led/led.o 00:03:20.923 LINK test_dma 00:03:20.923 CC examples/vmd/lsvmd/lsvmd.o 00:03:20.923 LINK spdk_nvme_identify 00:03:20.923 LINK nvme_fuzz 00:03:20.923 CC examples/sock/hello_world/hello_sock.o 00:03:20.923 LINK vhost_fuzz 00:03:20.923 CC app/vhost/vhost.o 00:03:20.923 LINK spdk_nvme 00:03:20.923 LINK spdk_nvme_perf 00:03:20.923 LINK spdk_bdev 00:03:20.923 CC examples/thread/thread/thread_ex.o 00:03:21.182 LINK lsvmd 00:03:21.182 LINK led 00:03:21.182 CC test/event/event_perf/event_perf.o 00:03:21.182 CC test/event/reactor_perf/reactor_perf.o 00:03:21.182 CC test/event/reactor/reactor.o 00:03:21.182 LINK mem_callbacks 00:03:21.182 CC test/event/app_repeat/app_repeat.o 00:03:21.182 CC test/event/scheduler/scheduler.o 00:03:21.182 LINK hello_sock 00:03:21.182 LINK vhost 00:03:21.182 LINK idxd_perf 00:03:21.182 LINK reactor_perf 00:03:21.442 LINK event_perf 00:03:21.442 LINK reactor 00:03:21.442 LINK thread 00:03:21.442 LINK app_repeat 00:03:21.442 LINK scheduler 00:03:21.442 CC test/nvme/boot_partition/boot_partition.o 00:03:21.442 CC test/nvme/aer/aer.o 00:03:21.442 CC test/nvme/sgl/sgl.o 00:03:21.442 CC test/nvme/e2edp/nvme_dp.o 00:03:21.442 CC test/nvme/reset/reset.o 00:03:21.442 CC test/nvme/reserve/reserve.o 00:03:21.442 CC test/nvme/compliance/nvme_compliance.o 00:03:21.442 CC test/nvme/startup/startup.o 00:03:21.442 CC test/nvme/err_injection/err_injection.o 00:03:21.442 CC test/nvme/simple_copy/simple_copy.o 00:03:21.442 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.442 CC test/nvme/cuse/cuse.o 00:03:21.442 CC test/nvme/fdp/fdp.o 00:03:21.442 CC test/nvme/overhead/overhead.o 00:03:21.442 CC test/nvme/connect_stress/connect_stress.o 00:03:21.442 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.442 CC test/accel/dif/dif.o 00:03:21.442 CC test/blobfs/mkfs/mkfs.o 00:03:21.442 LINK memory_ut 00:03:21.701 CC test/lvol/esnap/esnap.o 00:03:21.701 LINK boot_partition 00:03:21.701 CC examples/nvme/arbitration/arbitration.o 00:03:21.701 LINK startup 00:03:21.701 LINK connect_stress 00:03:21.701 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:21.701 CC examples/nvme/reconnect/reconnect.o 00:03:21.701 LINK err_injection 00:03:21.701 CC examples/nvme/hello_world/hello_world.o 00:03:21.701 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:21.701 LINK reserve 00:03:21.701 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:21.701 CC examples/nvme/abort/abort.o 00:03:21.701 CC examples/nvme/hotplug/hotplug.o 00:03:21.701 LINK doorbell_aers 00:03:21.701 LINK fused_ordering 00:03:21.701 LINK simple_copy 00:03:21.960 LINK sgl 00:03:21.960 LINK mkfs 00:03:21.960 LINK aer 00:03:21.960 LINK reset 00:03:21.960 LINK nvme_dp 00:03:21.960 LINK overhead 00:03:21.960 LINK nvme_compliance 00:03:21.960 LINK fdp 00:03:21.960 LINK iscsi_fuzz 00:03:21.960 CC examples/accel/perf/accel_perf.o 00:03:21.960 LINK pmr_persistence 00:03:21.960 LINK cmb_copy 00:03:21.960 CC examples/blob/cli/blobcli.o 00:03:21.960 LINK hello_world 00:03:21.960 CC 
examples/blob/hello_world/hello_blob.o 00:03:21.960 LINK dif 00:03:21.960 LINK hotplug 00:03:21.961 LINK arbitration 00:03:22.220 LINK reconnect 00:03:22.220 LINK abort 00:03:22.220 LINK nvme_manage 00:03:22.220 LINK hello_blob 00:03:22.481 LINK accel_perf 00:03:22.481 LINK blobcli 00:03:22.481 CC test/bdev/bdevio/bdevio.o 00:03:22.742 LINK cuse 00:03:23.001 CC examples/bdev/hello_world/hello_bdev.o 00:03:23.001 CC examples/bdev/bdevperf/bdevperf.o 00:03:23.001 LINK bdevio 00:03:23.262 LINK hello_bdev 00:03:23.523 LINK bdevperf 00:03:24.465 CC examples/nvmf/nvmf/nvmf.o 00:03:24.465 LINK nvmf 00:03:25.848 LINK esnap 00:03:26.108 00:03:26.108 real 0m51.207s 00:03:26.108 user 6m33.398s 00:03:26.108 sys 4m39.265s 00:03:26.108 18:59:32 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:26.108 18:59:32 make -- common/autotest_common.sh@10 -- $ set +x 00:03:26.108 ************************************ 00:03:26.108 END TEST make 00:03:26.108 ************************************ 00:03:26.108 18:59:32 -- common/autotest_common.sh@1142 -- $ return 0 00:03:26.108 18:59:32 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:26.108 18:59:32 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:26.108 18:59:32 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:26.108 18:59:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.108 18:59:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:26.108 18:59:32 -- pm/common@44 -- $ pid=1090069 00:03:26.108 18:59:32 -- pm/common@50 -- $ kill -TERM 1090069 00:03:26.108 18:59:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.108 18:59:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:26.108 18:59:32 -- pm/common@44 -- $ pid=1090070 00:03:26.108 18:59:32 -- pm/common@50 -- $ kill -TERM 1090070 00:03:26.108 18:59:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.108 18:59:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:26.108 18:59:32 -- pm/common@44 -- $ pid=1090072 00:03:26.108 18:59:32 -- pm/common@50 -- $ kill -TERM 1090072 00:03:26.108 18:59:32 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.108 18:59:32 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:26.108 18:59:32 -- pm/common@44 -- $ pid=1090092 00:03:26.108 18:59:32 -- pm/common@50 -- $ sudo -E kill -TERM 1090092 00:03:26.108 18:59:32 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:26.369 18:59:32 -- nvmf/common.sh@7 -- # uname -s 00:03:26.369 18:59:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:26.369 18:59:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:26.369 18:59:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:26.369 18:59:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:26.369 18:59:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:26.369 18:59:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:26.369 18:59:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:26.369 18:59:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:26.369 18:59:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:26.369 18:59:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:03:26.369 18:59:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:26.369 18:59:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:03:26.369 18:59:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:26.369 18:59:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:26.369 18:59:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:26.369 18:59:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:26.369 18:59:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:26.369 18:59:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:26.369 18:59:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:26.369 18:59:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:26.369 18:59:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.369 18:59:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.369 18:59:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.369 18:59:32 -- paths/export.sh@5 -- # export PATH 00:03:26.369 18:59:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:26.369 18:59:32 -- nvmf/common.sh@47 -- # : 0 00:03:26.369 18:59:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:26.369 18:59:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:26.369 18:59:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:26.369 18:59:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:26.369 18:59:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:26.369 18:59:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:26.369 18:59:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:26.369 18:59:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:26.369 18:59:32 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:26.369 18:59:32 -- spdk/autotest.sh@32 -- # uname -s 00:03:26.369 18:59:32 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:26.369 18:59:32 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:26.369 18:59:32 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:26.369 18:59:32 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:26.369 18:59:32 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:26.369 18:59:32 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:26.369 18:59:32 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:26.369 18:59:32 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:26.369 18:59:32 -- spdk/autotest.sh@48 -- # udevadm_pid=1153207 00:03:26.369 18:59:32 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:26.369 18:59:32 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:26.369 18:59:32 -- pm/common@17 -- # local monitor 00:03:26.369 18:59:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.369 18:59:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.369 18:59:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.369 18:59:32 -- pm/common@21 -- # date +%s 00:03:26.369 18:59:32 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:26.369 18:59:32 -- pm/common@21 -- # date +%s 00:03:26.369 18:59:32 -- pm/common@25 -- # sleep 1 00:03:26.369 18:59:32 -- pm/common@21 -- # date +%s 00:03:26.369 18:59:32 -- pm/common@21 -- # date +%s 00:03:26.369 18:59:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803572 00:03:26.369 18:59:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803572 00:03:26.369 18:59:32 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803572 00:03:26.369 18:59:32 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720803572 00:03:26.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803572_collect-vmstat.pm.log 00:03:26.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803572_collect-cpu-load.pm.log 00:03:26.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803572_collect-cpu-temp.pm.log 00:03:26.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720803572_collect-bmc-pm.bmc.pm.log 00:03:27.311 18:59:33 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:27.311 18:59:33 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:27.311 18:59:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:27.311 18:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:27.311 18:59:33 -- spdk/autotest.sh@59 -- # create_test_list 00:03:27.311 18:59:33 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:27.311 18:59:33 -- common/autotest_common.sh@10 -- # set +x 00:03:27.311 18:59:33 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:27.311 18:59:33 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:27.311 18:59:33 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:03:27.311 18:59:33 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:27.311 18:59:33 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:27.311 18:59:33 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:27.311 18:59:33 -- common/autotest_common.sh@1455 -- # uname 00:03:27.311 18:59:33 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:27.311 18:59:33 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:27.311 18:59:33 -- common/autotest_common.sh@1475 -- # uname 00:03:27.311 18:59:33 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:27.311 18:59:33 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:27.311 18:59:33 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:27.311 18:59:33 -- spdk/autotest.sh@72 -- # hash lcov 00:03:27.311 18:59:33 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:27.311 18:59:33 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:27.311 --rc lcov_branch_coverage=1 00:03:27.311 --rc lcov_function_coverage=1 00:03:27.311 --rc genhtml_branch_coverage=1 00:03:27.311 --rc genhtml_function_coverage=1 00:03:27.311 --rc genhtml_legend=1 00:03:27.311 --rc geninfo_all_blocks=1 00:03:27.311 ' 00:03:27.311 18:59:33 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:27.311 --rc lcov_branch_coverage=1 00:03:27.311 --rc lcov_function_coverage=1 00:03:27.311 --rc genhtml_branch_coverage=1 00:03:27.311 --rc genhtml_function_coverage=1 00:03:27.311 --rc genhtml_legend=1 00:03:27.311 --rc geninfo_all_blocks=1 00:03:27.311 ' 00:03:27.311 18:59:33 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:27.311 --rc lcov_branch_coverage=1 00:03:27.311 --rc lcov_function_coverage=1 00:03:27.311 --rc genhtml_branch_coverage=1 00:03:27.311 --rc genhtml_function_coverage=1 00:03:27.311 --rc genhtml_legend=1 00:03:27.311 --rc geninfo_all_blocks=1 00:03:27.311 --no-external' 00:03:27.311 18:59:33 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:27.311 --rc lcov_branch_coverage=1 00:03:27.311 --rc lcov_function_coverage=1 00:03:27.311 --rc genhtml_branch_coverage=1 00:03:27.311 --rc genhtml_function_coverage=1 00:03:27.311 --rc genhtml_legend=1 00:03:27.311 --rc geninfo_all_blocks=1 00:03:27.311 --no-external' 00:03:27.311 18:59:33 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:27.572 lcov: LCOV version 1.14 00:03:27.572 18:59:33 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:39.871 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:39.871 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:52.153 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:52.153 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:52.153 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not 
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:52.154 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:03:52.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 
00:03:52.154 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:52.415 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:52.415 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:52.415 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:52.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:52.416 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:52.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:52.677 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:52.677 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:52.677 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:56.876 19:00:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:56.876 19:00:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:56.876 19:00:02 -- common/autotest_common.sh@10 -- # set +x 00:03:56.876 19:00:02 -- spdk/autotest.sh@91 -- # rm -f 00:03:56.876 19:00:02 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:00.172 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:65:00.0 (144d a80a): Already using the nvme driver 00:04:00.172 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:04:00.172 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:04:00.433 19:00:06 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:00.433 19:00:06 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:00.433 19:00:06 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:00.433 19:00:06 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:00.433 19:00:06 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:00.433 19:00:06 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:00.433 19:00:06 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:00.433 19:00:06 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:00.433 19:00:06 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:00.433 19:00:06 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:00.433 19:00:06 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:00.433 19:00:06 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:00.433 19:00:06 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:00.433 19:00:06 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:00.433 19:00:06 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:00.692 No valid GPT data, bailing 00:04:00.692 19:00:06 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:00.692 19:00:06 -- scripts/common.sh@391 -- # pt= 00:04:00.692 19:00:06 -- scripts/common.sh@392 -- # return 1 00:04:00.692 19:00:06 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:00.692 1+0 records in 00:04:00.692 1+0 records out 00:04:00.692 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00136924 s, 766 MB/s 00:04:00.692 19:00:06 -- spdk/autotest.sh@118 -- # sync 00:04:00.692 19:00:06 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:00.692 19:00:06 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:00.693 19:00:06 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:08.830 19:00:13 -- spdk/autotest.sh@124 -- # uname -s 00:04:08.830 19:00:13 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:08.830 19:00:13 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:08.830 19:00:13 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.830 19:00:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.830 19:00:13 -- common/autotest_common.sh@10 -- # set +x 00:04:08.830 ************************************ 00:04:08.830 START TEST setup.sh 00:04:08.830 ************************************ 00:04:08.830 19:00:13 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:08.830 * Looking for test storage... 00:04:08.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:08.830 19:00:14 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:08.830 19:00:14 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:08.830 19:00:14 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:08.830 19:00:14 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.830 19:00:14 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.830 19:00:14 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:08.830 ************************************ 00:04:08.830 START TEST acl 00:04:08.830 ************************************ 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:08.830 * Looking for test storage... 
00:04:08.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:08.830 19:00:14 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.830 19:00:14 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.830 19:00:14 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:08.830 19:00:14 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:08.830 19:00:14 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:08.830 19:00:14 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:08.830 19:00:14 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:08.830 19:00:14 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:08.830 19:00:14 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.133 19:00:18 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:12.133 19:00:18 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:12.133 19:00:18 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.133 19:00:18 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:12.133 19:00:18 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.133 19:00:18 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:15.439 Hugepages 00:04:15.439 node hugesize free / total 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.439 00:04:15.439 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.0 == *:*:*.* ]] 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.439 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:01.1 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.2 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.3 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.4 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.5 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.6 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:01.7 == *:*:*.* ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.440 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.0 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.1 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.2 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.3 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.4 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.5 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.6 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:01.7 == *:*:*.* ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:15.702 19:00:21 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:15.702 19:00:21 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.702 19:00:21 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.702 19:00:21 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:15.702 ************************************ 00:04:15.702 START TEST denied 00:04:15.702 ************************************ 00:04:15.702 19:00:21 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:15.702 19:00:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0' 00:04:15.702 19:00:21 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:15.702 19:00:21 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0' 00:04:15.702 19:00:21 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:15.702 19:00:21 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.896 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]] 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver 00:04:19.896 19:00:25 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.896 19:00:25 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:25.182 00:04:25.182 real 0m8.544s 00:04:25.182 user 0m2.896s 00:04:25.182 sys 0m4.916s 00:04:25.182 19:00:30 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.182 19:00:30 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:25.182 ************************************ 00:04:25.182 END TEST denied 00:04:25.182 ************************************ 00:04:25.182 19:00:30 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:25.182 19:00:30 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:25.182 19:00:30 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.182 19:00:30 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.182 19:00:30 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:25.182 ************************************ 00:04:25.182 START TEST allowed 00:04:25.182 ************************************ 00:04:25.182 19:00:30 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:25.182 19:00:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0 00:04:25.182 19:00:30 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:25.182 19:00:30 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*' 00:04:25.182 19:00:30 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:25.182 19:00:30 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.426 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:29.426 19:00:35 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:29.426 19:00:35 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:29.426 19:00:35 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:29.426 19:00:35 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.426 19:00:35 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.637 00:04:33.637 real 0m8.802s 00:04:33.637 user 0m2.276s 00:04:33.637 sys 0m4.618s 00:04:33.637 19:00:39 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.637 19:00:39 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:33.637 ************************************ 00:04:33.637 END TEST allowed 00:04:33.637 ************************************ 00:04:33.637 19:00:39 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:33.637 00:04:33.637 real 0m25.133s 00:04:33.637 user 0m8.085s 00:04:33.637 sys 0m14.594s 00:04:33.637 19:00:39 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:33.637 19:00:39 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.637 ************************************ 00:04:33.637 END TEST acl 00:04:33.637 ************************************ 00:04:33.637 19:00:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:33.637 19:00:39 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.637 19:00:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.637 19:00:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.637 19:00:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:33.637 ************************************ 00:04:33.637 START TEST hugepages 00:04:33.637 ************************************ 00:04:33.637 19:00:39 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:33.637 * Looking for test storage... 00:04:33.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 103301788 kB' 'MemAvailable: 106555180 kB' 'Buffers: 2704 kB' 'Cached: 14347864 kB' 'SwapCached: 0 kB' 'Active: 11376840 kB' 'Inactive: 3514444 kB' 'Active(anon): 10966024 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 544232 kB' 'Mapped: 162732 kB' 'Shmem: 10425308 kB' 'KReclaimable: 304928 kB' 'Slab: 1141252 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836324 kB' 'KernelStack: 27344 kB' 'PageTables: 8656 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460892 kB' 'Committed_AS: 12553796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235348 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.637 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.638 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.639 19:00:39 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:33.639 19:00:39 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:33.639 19:00:39 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.639 19:00:39 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.639 19:00:39 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:33.639 ************************************ 00:04:33.639 START TEST default_setup 00:04:33.639 ************************************ 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.639 19:00:39 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:36.945 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:80:01.4 (8086 0b00): ioatdma -> 
vfio-pci 00:04:36.945 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:04:36.945 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.945 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105458416 kB' 'MemAvailable: 108711808 kB' 'Buffers: 2704 kB' 'Cached: 14347984 kB' 'SwapCached: 0 kB' 'Active: 11397328 kB' 'Inactive: 3514444 kB' 'Active(anon): 10986512 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 564396 kB' 'Mapped: 163524 kB' 'Shmem: 10425428 kB' 'KReclaimable: 304928 kB' 'Slab: 1139164 kB' 'SReclaimable: 304928 
kB' 'SUnreclaim: 834236 kB' 'KernelStack: 27248 kB' 'PageTables: 8472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12577056 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235304 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.946 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105458416 kB' 'MemAvailable: 108711808 kB' 'Buffers: 2704 kB' 'Cached: 14347984 kB' 'SwapCached: 0 kB' 'Active: 11396600 kB' 'Inactive: 3514444 kB' 'Active(anon): 10985784 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 563688 kB' 'Mapped: 163528 kB' 'Shmem: 10425428 kB' 'KReclaimable: 304928 kB' 'Slab: 1139164 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834236 kB' 'KernelStack: 27216 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12577076 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235288 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.947 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.948 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105461396 kB' 'MemAvailable: 108714788 kB' 'Buffers: 2704 kB' 'Cached: 14348020 kB' 'SwapCached: 0 kB' 'Active: 11391280 kB' 'Inactive: 3514444 kB' 'Active(anon): 10980464 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558376 kB' 'Mapped: 162960 kB' 'Shmem: 10425464 kB' 'KReclaimable: 304928 kB' 'Slab: 1139192 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834264 kB' 'KernelStack: 27280 kB' 'PageTables: 8636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12571476 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235284 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:36.949 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 
19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.213 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.214 nr_hugepages=1024 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.214 resv_hugepages=0 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.214 surplus_hugepages=0 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.214 anon_hugepages=0 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 
19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105461052 kB' 'MemAvailable: 108714444 kB' 'Buffers: 2704 kB' 'Cached: 14348020 kB' 'SwapCached: 0 kB' 'Active: 11391356 kB' 'Inactive: 3514444 kB' 'Active(anon): 10980540 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558496 kB' 'Mapped: 162960 kB' 'Shmem: 10425464 kB' 'KReclaimable: 304928 kB' 'Slab: 1139192 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834264 kB' 'KernelStack: 27296 kB' 'PageTables: 8684 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12571500 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235284 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.214 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 
19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.215 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:37.216 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:37.216 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53023312 kB' 'MemUsed: 12635696 kB' 'SwapCached: 0 kB' 'Active: 4636668 kB' 'Inactive: 3293724 kB' 'Active(anon): 4494000 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7664692 kB' 'Mapped: 66916 kB' 'AnonPages: 268924 kB' 'Shmem: 4228300 kB' 'KernelStack: 13560 kB' 'PageTables: 5020 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186352 kB' 'Slab: 682744 kB' 
'SReclaimable: 186352 kB' 'SUnreclaim: 496392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:37.216 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@31/@32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue -- one pass each for MemTotal, MemFree, MemUsed, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total and HugePages_Free (none match)
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:37.217 node0=1024 expecting 1024
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:37.217 
00:04:37.217 real 0m3.659s
00:04:37.217 user 0m1.343s
00:04:37.217 sys 0m2.243s
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:37.217 19:00:43 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:37.217 ************************************
00:04:37.217 END TEST default_setup
00:04:37.217 ************************************
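The default_setup trace above ends inside setup/common.sh's get_meminfo, which walks a meminfo listing field by field until it reaches the requested key (HugePages_Surp here) and echoes its value. A minimal standalone sketch of that pattern, assuming plain /proc/meminfo input (the traced script additionally strips the "Node <id>" prefix when it reads a per-node file under /sys/devices/system/node; that step is omitted here):

  # Sketch only -- not the SPDK setup/common.sh implementation.
  get_field() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the repeated 'continue' entries in the trace
          echo "$val"                        # kB value for most fields, a page count for HugePages_*
          return 0
      done </proc/meminfo
      return 1
  }
  get_field HugePages_Surp   # prints 0 on this host, matching the 'echo 0' recorded above
  get_field HugePages_Free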
00:04:37.217 19:00:43 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:37.217 19:00:43 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:37.217 19:00:43 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:37.217 19:00:43 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:37.217 19:00:43 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:37.217 ************************************
00:04:37.217 START TEST per_node_1G_alloc
00:04:37.217 ************************************
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
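Before setup.sh is invoked, get_test_nr_hugepages has already turned the 1 GiB request into per-node hugepage counts. The arithmetic behind the logged values, redone as a standalone sketch (the numbers come from the log above; the variable names here are illustrative, not copied from setup/hugepages.sh):

  # Sketch of the logged computation, assuming the 2048 kB default hugepage size
  # shown in the meminfo dumps below.
  size_kb=1048576                          # get_test_nr_hugepages 1048576 0 1
  hugepage_kb=2048                         # 'Hugepagesize: 2048 kB'
  nodes=(0 1)                              # HUGENODE=0,1
  per_node=$(( size_kb / hugepage_kb ))    # 512 -> NRHUGE=512
  nodes_test=()
  for n in "${nodes[@]}"; do nodes_test[n]=$per_node; done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]} total=$(( per_node * ${#nodes[@]} ))"
  # -> node0=512 node1=512 total=1024, consistent with nr_hugepages=1024 further down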
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:37.217 19:00:43 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:40.519 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:40.519 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:40.519 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
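The "Already using the vfio-pci driver" lines above are printed while scripts/setup.sh runs; the binding they report can also be read straight from sysfs. A small illustrative helper based on the standard kernel sysfs layout (it is not part of the SPDK scripts):

  # Sketch: report which driver a PCI function is bound to.
  bound_driver() {
      local link=/sys/bus/pci/devices/$1/driver
      if [[ -e $link ]]; then
          basename "$(readlink -f "$link")"   # e.g. vfio-pci
      else
          echo "(no driver bound)"
      fi
  }
  bound_driver 0000:65:00.0   # expected to print vfio-pci on this host, per the log
  bound_driver 0000:80:01.0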
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105449928 kB' 'MemAvailable: 108703320 kB' 'Buffers: 2704 kB' 'Cached: 14348156 kB' 'SwapCached: 0 kB' 'Active: 11390592 kB' 'Inactive: 3514444 kB' 'Active(anon): 10979776 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557368 kB' 'Mapped: 162056 kB' 'Shmem: 10425600 kB' 'KReclaimable: 304928 kB' 'Slab: 1139400 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834472 kB' 'KernelStack: 27232 kB' 'PageTables: 8416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12559984 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB'
00:04:40.782 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue -- one pass each for MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted (none match)
00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return
0 00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:40.783 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105449160 kB' 'MemAvailable: 108702552 kB' 'Buffers: 2704 kB' 'Cached: 14348160 kB' 'SwapCached: 0 kB' 'Active: 11390524 kB' 'Inactive: 3514444 kB' 'Active(anon): 10979708 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557164 kB' 'Mapped: 161996 kB' 'Shmem: 10425604 kB' 'KReclaimable: 304928 kB' 'Slab: 1139352 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834424 kB' 'KernelStack: 27264 kB' 'PageTables: 8512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12561244 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235460 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.050 19:00:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.050 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31/@32 -- # IFS=': ' / read -r var val _ / [[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue -- one pass each for MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd (none match)
00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.051 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105449664 kB' 'MemAvailable: 108703056 kB' 'Buffers: 2704 kB' 'Cached: 14348160 kB' 'SwapCached: 0 kB' 'Active: 11390452 kB' 'Inactive: 3514444 kB' 'Active(anon): 10979636 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557356 kB' 'Mapped: 162000 kB' 'Shmem: 10425604 kB' 'KReclaimable: 304928 kB' 'Slab: 1139352 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834424 kB' 'KernelStack: 27232 kB' 'PageTables: 8424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12561508 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235444 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 
19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.052 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.053 19:00:46 
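The loop traced above is setup/common.sh's get_meminfo helper walking the memory counters one key at a time: every non-matching field falls through to continue, and the requested key (HugePages_Rsvd in this pass) ends the walk with echo 0 / return 0. A minimal standalone sketch of the same pattern follows; get_mf is an illustrative name, not the SPDK helper itself, and it assumes bash with process substitution available.

  # Illustrative sketch (get_mf is not the SPDK function; same parsing idea only):
  # look up one field from /proc/meminfo, or from a per-node meminfo file when a
  # node number is passed, by splitting each line on ': ' and matching the key.
  get_mf() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local var val _
      # Per-node lines carry a "Node N " prefix; drop it, then split on ': '
      # the way the traced loop does and stop at the first matching key.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
      return 1
  }

  get_mf HugePages_Rsvd      # prints 0 on this box, matching the trace
  get_mf HugePages_Surp 0    # surplus pages on NUMA node 0

The real helper strips the Node prefix with an extglob parameter expansion and collects the file via mapfile instead, which is why the trace shows mapfile -t mem and mem=("${mem[@]#Node +([0-9]) }") before each read loop.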
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:41.053 nr_hugepages=1024 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.053 resv_hugepages=0 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.053 surplus_hugepages=0 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.053 anon_hugepages=0 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.053 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105451808 kB' 'MemAvailable: 108705200 kB' 'Buffers: 2704 kB' 'Cached: 14348200 kB' 'SwapCached: 0 kB' 'Active: 11390360 kB' 'Inactive: 3514444 kB' 'Active(anon): 10979544 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 557220 kB' 'Mapped: 162000 kB' 'Shmem: 10425644 kB' 'KReclaimable: 304928 kB' 'Slab: 1139352 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 834424 kB' 'KernelStack: 27328 kB' 'PageTables: 8336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12561524 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.054 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
[[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.055 19:00:47 
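By this point hugepages.sh has surp=0 and resv=0 from the two lookups above, prints the nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0 summary, and a further get_meminfo confirms HugePages_Total is 1024, so the (( 1024 == nr_hugepages + surp + resv )) assertion passes before get_nodes runs. The same bookkeeping can be reproduced outside the test with a small sketch; hp is an illustrative helper here, not part of setup/common.sh.

  hp() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

  nr_hugepages=$(cat /proc/sys/vm/nr_hugepages)   # requested pool size, 1024 in this run
  total=$(hp HugePages_Total)
  surp=$(hp HugePages_Surp)
  resv=$(hp HugePages_Rsvd)

  # The traced assertion: the kernel-reported pool must equal the requested
  # count plus any surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) && echo "pool accounting OK"

With Hugepagesize at 2048 kB this pool is 1024 x 2 MiB = 2 GiB (Hugetlb: 2097152 kB in the snapshot), i.e. 512 pages or 1 GiB per node on this two-node system, which is the split the per_node_1G_alloc case goes on to verify.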
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.055 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.056 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54069452 kB' 'MemUsed: 11589556 kB' 'SwapCached: 0 kB' 'Active: 4637236 kB' 'Inactive: 3293724 kB' 'Active(anon): 4494568 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7664856 kB' 'Mapped: 66440 kB' 'AnonPages: 269312 kB' 'Shmem: 4228464 kB' 'KernelStack: 13512 kB' 'PageTables: 4836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186352 kB' 'Slab: 683156 kB' 'SReclaimable: 186352 kB' 'SUnreclaim: 496804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.056 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.056 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:41.056 19:00:47 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # (meminfo scan: MemFree through HugePages_Free all fail the HugePages_Surp match and hit continue)
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51379348 kB' 'MemUsed: 9300524 kB' 'SwapCached: 0 kB' 'Active: 6753060 kB' 'Inactive: 220720 kB' 'Active(anon): 6484912 kB' 'Inactive(anon): 0 kB' 'Active(file): 268148 kB' 'Inactive(file): 220720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6686072 kB' 'Mapped: 95560 kB' 'AnonPages: 287732 kB' 'Shmem: 6197204 kB' 'KernelStack: 13848 kB' 'PageTables: 4076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118576 kB' 'Slab: 456228 kB' 'SReclaimable: 118576 kB' 'SUnreclaim: 337652 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:41.057 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # (node1 meminfo scan: MemTotal through HugePages_Free all fail the HugePages_Surp match and hit continue)
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:41.058 node0=512 expecting 512
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:41.058 node1=512 expecting 512
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:41.058 
00:04:41.058 real 0m3.839s
00:04:41.058 user 0m1.489s
00:04:41.058 sys 0m2.407s
00:04:41.058 19:00:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
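The per-node numbers above come out of get_meminfo, which the xtrace shows picking either /proc/meminfo or /sys/devices/system/node/nodeN/meminfo, stripping the "Node N " prefix, and scanning line by line for the requested field. A minimal stand-alone sketch of that logic follows; the name get_meminfo_sketch and its calling convention are illustrative, not the project's actual helper.

#!/usr/bin/env bash
# Minimal sketch of a meminfo field reader in the spirit of the get_meminfo
# calls traced above; not the actual setup/common.sh implementation.
shopt -s extglob    # needed for the +([0-9]) prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}                 # field name, optional NUMA node
    local mem_f=/proc/meminfo mem=() line var val _
    # Per-node meminfo files live in sysfs and prefix every line with "Node N ".
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # drop the "Node N " prefix if present
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done
    return 1
}

# Example: surplus 2048 kB pages on node 1 (0 in the run above).
get_meminfo_sketch HugePages_Surp 1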
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:41.058 ************************************ 00:04:41.058 END TEST per_node_1G_alloc 00:04:41.058 ************************************ 00:04:41.058 19:00:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:41.058 19:00:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:41.058 19:00:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.058 19:00:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.058 19:00:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.058 ************************************ 00:04:41.058 START TEST even_2G_alloc 00:04:41.058 ************************************ 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.058 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:41.059 19:00:47 
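The even_2G_alloc prologue above turns a 2G request (2097152, in kB) into 1024 default-size pages and hands 512 to each of the two NUMA nodes. Below is a rough, self-contained rendition of that arithmetic; discovering the node count via sysfs and the variable names used here are illustrative, while the traced script keeps its counts in nodes_test.

#!/usr/bin/env bash
# Sketch: reduce a 2G request to default-size hugepages and split the count
# evenly across NUMA nodes, as the even_2G_alloc prologue above does.
size_kb=2097152                               # same request as the trace, in kB
hugepagesize_kb=2048                          # Hugepagesize reported in meminfo above
nr_hugepages=$((size_kb / hugepagesize_kb))   # -> 1024

no_nodes=$(ls -d /sys/devices/system/node/node[0-9]* 2>/dev/null | wc -l)
(( no_nodes > 0 )) || no_nodes=1

declare -a nodes_test
per_node=$((nr_hugepages / no_nodes))
rem=$((nr_hugepages % no_nodes))
for ((node = 0; node < no_nodes; node++)); do
    nodes_test[node]=$per_node
    if (( node < rem )); then                 # spread any remainder over the first nodes
        (( nodes_test[node]++ ))
    fi
done
echo "nr_hugepages=$nr_hugepages split across $no_nodes node(s): ${nodes_test[*]}"

On the two-socket machine in this log that yields the 512/512 split that setup.sh is then asked to realize via NRHUGE=1024 and HUGE_EVEN_ALLOC=yes.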
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.059 19:00:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.361 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:44.361 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:44.362 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:44.362 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.627 19:00:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105419048 kB' 'MemAvailable: 108672440 kB' 'Buffers: 2704 kB' 'Cached: 14348340 kB' 'SwapCached: 0 kB' 'Active: 11396616 kB' 'Inactive: 3514444 kB' 'Active(anon): 10985800 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562876 kB' 'Mapped: 162652 kB' 'Shmem: 10425784 kB' 'KReclaimable: 304928 kB' 'Slab: 1140168 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835240 kB' 'KernelStack: 27488 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12566848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235652 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB'
00:04:44.627 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (meminfo scan: MemTotal through HardwareCorrupted all fail the AnonHugePages match and hit continue)
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105416040 kB' 'MemAvailable: 108669432 kB' 'Buffers: 2704 kB' 'Cached: 14348344 kB' 'SwapCached: 0 kB' 'Active: 11392432 kB' 'Inactive: 3514444 kB' 'Active(anon): 10981616 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558740 kB' 'Mapped: 162464 kB' 'Shmem: 10425788 kB' 'KReclaimable: 304928 kB' 'Slab: 1140128 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835200 kB' 'KernelStack: 27360 kB' 'PageTables: 8872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12563684 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB'
00:04:44.628 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # (meminfo scan: MemTotal through SUnreclaim fail the HugePages_Surp match and hit continue)
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105414084 kB' 'MemAvailable: 108667476 kB' 'Buffers: 2704 kB' 'Cached: 14348360 kB' 'SwapCached: 0 kB' 'Active: 11391592 kB' 'Inactive: 3514444 kB' 'Active(anon): 10980776 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558392 kB' 'Mapped: 162016 kB' 'Shmem: 10425804 kB' 'KReclaimable: 304928 kB' 'Slab: 1140152 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835224 kB' 'KernelStack: 27440 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12562096 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.629 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 
19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.630 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.631 nr_hugepages=1024 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.631 resv_hugepages=0 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.631 surplus_hugepages=0 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.631 anon_hugepages=0 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.631 19:00:50 
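The trace above is setup/common.sh's get_meminfo being stepped through /proc/meminfo key by key: each line is split with IFS=': ' into key and value, non-matching keys fall through to "continue", and the matching key's value is echoed back to hugepages.sh, which records anon=0, surp=0 and resv=0 before checking that the 1024 preallocated 2048 kB pages add up. What follows is a minimal standalone sketch of that lookup pattern and the accounting check, an editor illustration under assumed names (get_mem is hypothetical, not the SPDK helper), assuming bash with extglob:

    #!/usr/bin/env bash
    # Sketch of the meminfo lookup pattern seen in the trace above
    # (hypothetical helper, not the actual setup/common.sh implementation).
    set -euo pipefail
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # get_mem KEY [NODE] -> print the value column for KEY, read from /proc/meminfo,
    # or from /sys/devices/system/node/node<NODE>/meminfo when NODE is given.
    get_mem() {
        local get=$1 node=${2-} mem_f=/proc/meminfo
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#Node +([0-9]) }      # per-node files prefix each line with "Node <n> "
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done <"$mem_f"
        return 1
    }

    # The bookkeeping the even_2G_alloc test performs at this point: with
    # nr_hugepages=1024 requested and no surplus or reserved pages, the pool balances.
    nr_hugepages=1024
    surp=$(get_mem HugePages_Surp)
    resv=$(get_mem HugePages_Rsvd)
    total=$(get_mem HugePages_Total)
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool consistent: total=$total surp=$surp resv=$resv"
    fi

The optional NODE argument mirrors the node= / node=0 distinction visible in the trace: with no node the system-wide /proc/meminfo is scanned, while the per-node pass that follows switches to /sys/devices/system/node/node0/meminfo after the 1024 pages are split 512 per NUMA node.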
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105415384 kB' 'MemAvailable: 108668776 kB' 'Buffers: 2704 kB' 'Cached: 14348384 kB' 'SwapCached: 0 kB' 'Active: 11392012 kB' 'Inactive: 3514444 kB' 'Active(anon): 10981196 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558780 kB' 'Mapped: 162016 kB' 'Shmem: 10425828 kB' 'KReclaimable: 304928 kB' 'Slab: 1140152 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835224 kB' 'KernelStack: 27440 kB' 'PageTables: 9176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12563728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.631 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54062292 kB' 'MemUsed: 11596716 kB' 'SwapCached: 0 kB' 'Active: 4636496 kB' 'Inactive: 3293724 kB' 'Active(anon): 4493828 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7665016 kB' 'Mapped: 66440 kB' 'AnonPages: 268524 kB' 'Shmem: 4228624 kB' 'KernelStack: 13512 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186352 kB' 'Slab: 683376 kB' 'SReclaimable: 186352 kB' 'SUnreclaim: 497024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.632 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
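The trace above walks /sys/devices/system/node/node0/meminfo key by key (IFS=': ' plus read -r var val _), skipping every field until it reaches HugePages_Surp and echoes its value (0 here, with 512 hugepages total and free on the node). A minimal standalone sketch of that parsing pattern follows; it is an illustration inferred from the xtrace, not the actual setup/common.sh get_meminfo helper, and the function name get_meminfo_value is hypothetical.

#!/usr/bin/env bash
# Hedged sketch: read one key (e.g. HugePages_Surp) from /proc/meminfo or a
# per-node meminfo file, mirroring the IFS=': ' / read loop visible in the trace.
get_meminfo_value() {
    local key=$1 node=${2:-}              # e.g. get_meminfo_value HugePages_Surp 0
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a fields
    while IFS=': ' read -r -a fields; do
        # Per-node files prefix each line with "Node <n>"; drop those two fields.
        [[ ${fields[0]:-} == Node ]] && fields=("${fields[@]:2}")
        if [[ ${fields[0]:-} == "$key" ]]; then
            echo "${fields[1]}"           # value only, e.g. 0 or 512 (unit dropped)
            return 0
        fi
    done < "$mem_f"
    return 1
}
# Example (assumes a NUMA node 0 exists): get_meminfo_value HugePages_Surp 0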
00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.895 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51352648 kB' 'MemUsed: 9327224 kB' 'SwapCached: 0 kB' 'Active: 6754956 kB' 'Inactive: 220720 kB' 'Active(anon): 6486808 kB' 'Inactive(anon): 0 kB' 'Active(file): 268148 kB' 'Inactive(file): 220720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6686092 kB' 'Mapped: 95576 kB' 'AnonPages: 289604 kB' 'Shmem: 6197224 kB' 'KernelStack: 13880 kB' 'PageTables: 4040 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118576 kB' 'Slab: 456776 kB' 'SReclaimable: 118576 kB' 'SUnreclaim: 338200 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.896 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 
19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 
19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.897 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:44.898 node0=512 expecting 512 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:44.898 node1=512 expecting 512 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:44.898 00:04:44.898 real 0m3.656s 00:04:44.898 user 0m1.490s 00:04:44.898 sys 0m2.205s 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.898 19:00:50 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:44.898 ************************************ 00:04:44.898 END TEST even_2G_alloc 00:04:44.898 
************************************ 00:04:44.898 19:00:50 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:44.898 19:00:50 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:44.898 19:00:50 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.898 19:00:50 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.898 19:00:50 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:44.898 ************************************ 00:04:44.898 START TEST odd_alloc 00:04:44.898 ************************************ 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:44.898 19:00:50 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
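At this point the odd_alloc test has computed its target: a size of 2098176 kB, for which the script sets nr_hugepages=1025 (2 MiB pages) and spreads them over the two NUMA nodes as nodes_test[0]=513 and nodes_test[1]=512 before invoking scripts/setup.sh with HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes. A hedged sketch of one way to reproduce that 513/512 split (remainder given to the lowest-numbered nodes) is shown below; it is an illustrative equivalent of the result seen in the trace, not the actual setup/hugepages.sh per-node logic, and split_hugepages_across_nodes is a hypothetical helper name.

#!/usr/bin/env bash
# Hedged sketch: spread an odd hugepage count across NUMA nodes, giving the
# remainder to the lowest-numbered nodes (1025 pages -> 513 + 512).
split_hugepages_across_nodes() {
    local total=$1 nodes=$2
    local base=$(( total / nodes ))       # 1025 / 2 = 512
    local extra=$(( total % nodes ))      # 1025 % 2 = 1
    local i
    for (( i = 0; i < nodes; i++ )); do
        echo "node$i=$(( base + (i < extra ? 1 : 0) ))"
    done
}
split_hugepages_across_nodes 1025 2       # prints node0=513 then node1=512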
00:04:48.198 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:48.198 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:48.198 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105433760 kB' 'MemAvailable: 108687152 kB' 'Buffers: 2704 kB' 'Cached: 14348516 kB' 'SwapCached: 0 kB' 'Active: 11392908 kB' 'Inactive: 3514444 kB' 'Active(anon): 10982092 kB' 'Inactive(anon): 0 kB' 
'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559308 kB' 'Mapped: 162088 kB' 'Shmem: 10425960 kB' 'KReclaimable: 304928 kB' 'Slab: 1140432 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835504 kB' 'KernelStack: 27344 kB' 'PageTables: 8740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12564744 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235796 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.462 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 
19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 
19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.463 19:00:54 
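The trace up to this point shows the test walking /proc/meminfo one "key: value" pair at a time, skipping every key until AnonHugePages is reached and its value (0) is echoed back as anon=0; the same walk is then started again for HugePages_Surp. A minimal, self-contained sketch of that lookup pattern follows; lookup_meminfo and its arguments are illustrative assumptions, not the setup/common.sh helper itself, which differs in detail.

#!/usr/bin/env bash
# Illustrative sketch of the lookup pattern the trace exercises; names
# and structure are assumptions, not SPDK's own code.
lookup_meminfo() {
    local get=$1 node=${2:-} src var val _
    # With a NUMA node, read the per-node file and drop its leading
    # "Node N " prefix (as the trace does); otherwise use /proc/meminfo.
    if [[ -n $node ]]; then
        src=/sys/devices/system/node/node${node}/meminfo
    else
        src=/proc/meminfo
    fi
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$src")
    return 1  # key not present
}

# e.g. reproduce the three values this pass derives (anon, surplus, reserved)
printf 'anon=%s surp=%s resv=%s\n' \
    "$(lookup_meminfo AnonHugePages)" \
    "$(lookup_meminfo HugePages_Surp)" \
    "$(lookup_meminfo HugePages_Rsvd)"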
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.463 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105435612 kB' 'MemAvailable: 108689004 kB' 'Buffers: 2704 kB' 'Cached: 14348520 kB' 'SwapCached: 0 kB' 'Active: 11392436 kB' 'Inactive: 3514444 kB' 'Active(anon): 10981620 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558856 kB' 'Mapped: 162024 kB' 'Shmem: 10425964 kB' 'KReclaimable: 304928 kB' 'Slab: 1140412 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835484 kB' 'KernelStack: 27168 kB' 'PageTables: 8180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12561912 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 
19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.464 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105435576 kB' 'MemAvailable: 108688968 kB' 'Buffers: 2704 kB' 'Cached: 14348536 kB' 'SwapCached: 0 kB' 'Active: 11392128 kB' 'Inactive: 3514444 kB' 'Active(anon): 10981312 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558552 kB' 'Mapped: 162028 kB' 'Shmem: 10425980 kB' 'KReclaimable: 304928 kB' 'Slab: 1140368 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835440 kB' 'KernelStack: 27216 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12561932 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 
'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.465 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.466 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:48.467 nr_hugepages=1025 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:48.467 resv_hugepages=0 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:48.467 surplus_hugepages=0 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:48.467 anon_hugepages=0 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105435576 kB' 'MemAvailable: 108688968 kB' 'Buffers: 2704 kB' 'Cached: 14348556 kB' 'SwapCached: 0 kB' 'Active: 11392056 kB' 'Inactive: 3514444 kB' 'Active(anon): 10981240 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 558472 kB' 'Mapped: 162028 kB' 'Shmem: 10426000 kB' 'KReclaimable: 304928 kB' 'Slab: 1140368 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835440 kB' 'KernelStack: 27232 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508444 kB' 'Committed_AS: 12561952 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235604 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.467 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.468 
19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.468 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.731 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
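The trace above is the get_meminfo helper resolving HugePages_Total: it loads the whole meminfo file into an array, strips any "Node <N> " prefix, then walks the entries with IFS=': ' until the requested key matches and its value is echoed. A minimal standalone sketch of that lookup, reassembled from the commands visible in this trace (the function body here is simplified and is not the SPDK script itself):

#!/usr/bin/env bash
# Sketch of the meminfo lookup traced in setup/common.sh (simplified).
shopt -s extglob

get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node queries read that node's own meminfo file when present.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node <N> "; strip that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # not the key we want, keep scanning
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Total     # system-wide total, 1025 in this run
get_meminfo HugePages_Surp 0    # node 0 surplus, 0 in this run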
00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54053548 kB' 'MemUsed: 11605460 kB' 'SwapCached: 0 kB' 'Active: 4636952 kB' 'Inactive: 3293724 kB' 'Active(anon): 4494284 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7665156 kB' 'Mapped: 
66440 kB' 'AnonPages: 268672 kB' 'Shmem: 4228764 kB' 'KernelStack: 13512 kB' 'PageTables: 4792 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186352 kB' 'Slab: 683428 kB' 'SReclaimable: 186352 kB' 'SUnreclaim: 497076 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
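The per-node numbers this section collects come from a simple split: the test requested an odd total of 1025 hugepages (2048 kB each), which cannot be divided evenly between the two NUMA nodes, so one node carries one extra page. A quick arithmetic check of the figures seen in this run (plain shell, names illustrative):

total_pages=1025                      # system-wide HugePages_Total above
nodes=2
base=$(( total_pages / nodes ))       # 512 pages on one node
extra=$(( total_pages % nodes ))      # 1 page left over
echo "split: $base and $(( base + extra ))"    # 512 and 513, as reported per node
echo "sum:   $(( base + (base + extra) ))"     # 1025, matching the check above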
00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.732 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 51381932 kB' 'MemUsed: 9297940 kB' 'SwapCached: 0 kB' 'Active: 6755232 kB' 'Inactive: 220720 kB' 'Active(anon): 6487084 kB' 'Inactive(anon): 0 kB' 'Active(file): 268148 kB' 'Inactive(file): 220720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6686128 kB' 'Mapped: 95588 kB' 'AnonPages: 289920 kB' 'Shmem: 6197260 kB' 'KernelStack: 13720 kB' 'PageTables: 3612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118576 kB' 'Slab: 456940 kB' 'SReclaimable: 118576 kB' 'SUnreclaim: 338364 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:48.733 
19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.733 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
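Reduced to its skeleton, the loop traced around here walks every node, folds the reserved and surplus page counts into the expected per-node figure, and records both the expected and the reported values indexed by value rather than by node. That last detail matters for the 'node0=512 expecting 513' / 'node1=513 expecting 512' lines further down: the final check compares the sorted value sets, so a swapped per-node pairing still passes. A hedged sketch of that logic (per-node values hard-coded from this run; resv and surplus were 0 here; array names mirror the trace but the pairing is illustrative):

# Skeleton of the per-node verification traced in setup/hugepages.sh.
declare -a nodes_test=( [0]=513 [1]=512 )   # what the test expects per node
declare -a nodes_sys=(  [0]=512 [1]=513 )   # what node*/meminfo reported
declare -a sorted_t=() sorted_s=()
resv=0 surp=0                               # HugePages_Rsvd / HugePages_Surp in this run

for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv + surp ))   # fold reserved and surplus pages in
    sorted_t[nodes_test[node]]=1            # index by value: indices list back sorted
    sorted_s[nodes_sys[node]]=1
    echo "node${node}=${nodes_sys[node]} expecting ${nodes_test[node]}"
done

# Pass/fail compares the sorted sets of values, not the per-node pairing.
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "layout OK"   # 512 513 == 512 513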
00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.734 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:48.735 node0=512 expecting 513 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:48.735 node1=513 expecting 512 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:48.735 00:04:48.735 real 0m3.780s 00:04:48.735 user 0m1.481s 00:04:48.735 sys 0m2.359s 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.735 19:00:54 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:48.735 ************************************ 00:04:48.735 END TEST odd_alloc 00:04:48.735 ************************************ 00:04:48.735 19:00:54 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:48.735 19:00:54 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:48.735 19:00:54 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.735 19:00:54 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.735 19:00:54 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.735 ************************************ 00:04:48.735 START TEST custom_alloc 00:04:48.735 ************************************ 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.735 19:00:54 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:52.039 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 
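Before the verify step that follows, the numbers driving custom_alloc are worth spelling out: the sizes handed to get_test_nr_hugepages are in kB, and with the 2048 kB Hugepagesize reported in the meminfo dump earlier they become the per-node page counts packed into HUGENODE. A small sketch of that arithmetic (variable names are illustrative; the real script derives the counts through its get_test_nr_hugepages helpers):

hugepagesize_kb=2048                            # Hugepagesize from /proc/meminfo above

size0_kb=1048576                                # first get_test_nr_hugepages call
size1_kb=2097152                                # second call
pages0=$(( size0_kb / hugepagesize_kb ))        # 512  -> nodes_hp[0]
pages1=$(( size1_kb / hugepagesize_kb ))        # 1024 -> nodes_hp[1]

HUGENODE="nodes_hp[0]=${pages0},nodes_hp[1]=${pages1}"
echo "$HUGENODE"                                # nodes_hp[0]=512,nodes_hp[1]=1024
echo "total: $(( pages0 + pages1 ))"            # 1536, the nr_hugepages used below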
00:04:52.039 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:52.039 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104408848 kB' 'MemAvailable: 107662240 kB' 'Buffers: 2704 kB' 'Cached: 14348692 kB' 'SwapCached: 0 kB' 'Active: 11393584 kB' 'Inactive: 3514444 kB' 'Active(anon): 10982768 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559556 kB' 'Mapped: 162184 kB' 'Shmem: 10426136 kB' 'KReclaimable: 304928 kB' 'Slab: 1140800 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835872 kB' 'KernelStack: 27312 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12565864 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:57 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.039 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.040 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
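The long run of field-by-field comparisons above is the xtrace of a meminfo lookup: the whole of /proc/meminfo is read, each 'key: value' pair is compared against the requested field (AnonHugePages here), and 0 is echoed when the field is zero or absent, which is why the trace ends with anon=0. A self-contained sketch of the same lookup pattern is shown below; it assumes plain /proc/meminfo, omits the per-node /sys/devices/system/node/nodeN/meminfo branch that the trace checks for, and is an illustration rather than the setup/common.sh source.

# Sketch only: look up one field of /proc/meminfo, defaulting to 0.
get_meminfo_sketch() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < /proc/meminfo
    echo 0    # requested field not present at all
}
get_meminfo_sketch HugePages_Surp   # prints 0 on the machine traced above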
-- # local get=HugePages_Surp 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104409196 kB' 'MemAvailable: 107662588 kB' 'Buffers: 2704 kB' 'Cached: 14348692 kB' 'SwapCached: 0 kB' 'Active: 11393672 kB' 'Inactive: 3514444 kB' 'Active(anon): 10982856 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559596 kB' 'Mapped: 162124 kB' 'Shmem: 10426136 kB' 'KReclaimable: 304928 kB' 'Slab: 1140800 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835872 kB' 'KernelStack: 27248 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12562728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235492 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 
19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.041 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.042 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 
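The meminfo dumps above report HugePages_Total: 1536, HugePages_Free: 1536, Hugepagesize: 2048 kB and Hugetlb: 3145728 kB. A quick arithmetic check, using the per-node targets from the trace (512 on node 0, 1024 on node 1), confirms those figures are self-consistent; the snippet below is just that check, not part of the test scripts.

# Sketch only: sanity-check the totals reported in the dumps above.
node0=512 node1=1024
hugepagesize_kb=2048
echo "HugePages_Total = $(( node0 + node1 ))"                         # 1536
echo "Hugetlb         = $(( (node0 + node1) * hugepagesize_kb )) kB"  # 3145728 kB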
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104407772 kB' 'MemAvailable: 107661164 kB' 'Buffers: 2704 kB' 'Cached: 14348712 kB' 'SwapCached: 0 kB' 'Active: 11393040 kB' 'Inactive: 3514444 kB' 'Active(anon): 10982224 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559384 kB' 'Mapped: 162048 kB' 'Shmem: 10426156 kB' 'KReclaimable: 304928 kB' 'Slab: 1140800 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835872 kB' 'KernelStack: 27264 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12562748 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.043 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:52.044 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.044 19:00:58 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[scan condensed: setup/common.sh@31-32 reads and compares each remaining /proc/meminfo key (SwapTotal through HugePages_Free) against HugePages_Rsvd; none match, so every iteration continues]
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
nr_hugepages=1536
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.046 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 104407772 kB' 'MemAvailable: 107661164 kB' 'Buffers: 2704 kB' 'Cached: 14348736 kB' 'SwapCached: 0 kB' 'Active: 11393028 kB' 'Inactive: 3514444 kB' 'Active(anon): 10982212 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559388 kB' 'Mapped: 162048 kB' 'Shmem: 10426180 kB' 'KReclaimable: 304928 kB' 'Slab: 1140800 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 835872 kB' 'KernelStack: 27264 kB' 'PageTables: 8508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985180 kB' 'Committed_AS: 12562772 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235476 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB'
[scan condensed: setup/common.sh@31-32 walks the snapshot above key by key (MemTotal through HugePages_Free), continuing past every key that is not HugePages_Total]
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
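The lookups in this test (HugePages_Rsvd and HugePages_Total above, the per-node HugePages_Surp below) all go through the same setup/common.sh get_meminfo helper visible in the trace: it reads /proc/meminfo, or a per-node /sys/devices/system/node/nodeN/meminfo with the "Node N " prefix stripped, and walks it field by field. The following is a minimal stand-alone sketch reconstructed from the traced commands, not copied from the SPDK source; the name get_meminfo_sketch is illustrative only.

#!/usr/bin/env bash
# Minimal reconstruction of the get_meminfo pattern traced above (illustrative;
# the real setup/common.sh may differ in detail).
shopt -s extglob

get_meminfo_sketch() {
    local get=$1      # key to look up, e.g. HugePages_Total, HugePages_Rsvd, HugePages_Surp
    local node=${2:-} # optional NUMA node; empty means the system-wide /proc/meminfo
    local var val line
    local mem_f=/proc/meminfo
    local -a mem

    # Per-node lookups read that node's own meminfo file instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node N "; strip it so the key comes first.
    mem=("${mem[@]#Node +([0-9]) }")

    # Walk the snapshot field by field until the requested key matches, then print its value.
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done
    return 1
}

# With the values captured in the snapshot above this would print, for example,
# 1536 for "get_meminfo_sketch HugePages_Total" and 0 for "get_meminfo_sketch HugePages_Surp 0".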
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.048 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 54060104 kB' 'MemUsed: 11598904 kB' 'SwapCached: 0 kB' 'Active: 4636580 kB' 'Inactive: 3293724 kB' 'Active(anon): 4493912 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7665216 kB' 'Mapped: 66440 kB' 'AnonPages: 268264 kB' 'Shmem: 4228824 kB' 'KernelStack: 13528 kB' 'PageTables: 4892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186352 kB' 'Slab: 683836 kB' 'SReclaimable: 186352 kB' 'SUnreclaim: 497484 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[scan condensed: setup/common.sh@31-32 walks the node0 snapshot above key by key (MemTotal through HugePages_Free), continuing past every key that is not HugePages_Surp]
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
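Before the trace moves on to node 1: the accounting exercised here reduces to simple arithmetic. Node 0 was given 512 pages and node 1 was given 1024, which together account for the requested nr_hugepages=1536, with surplus and reserved both 0 (the checks above assert 1536 == nr_hugepages + surp + resv); 1536 pages of 2048 kB also match the Hugetlb: 3145728 kB line in the system snapshot. The following is a hypothetical stand-alone version of that check, reconstructed from the traced values rather than taken from setup/hugepages.sh.

#!/usr/bin/env bash
# Hypothetical stand-alone check mirroring the accounting in this test run:
# per-node HugePages_Total must sum to the global total, and in this run the
# global total equals nr_hugepages + surplus + reserved (1536 == 1536 + 0 + 0).
shopt -s extglob nullglob

global=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)

declare -a node_total
for node in /sys/devices/system/node/node+([0-9]); do
    id=${node##*node}
    # Per-node meminfo lines look like "Node 0 HugePages_Total:   512".
    node_total[id]=$(awk '/HugePages_Total:/ {print $NF}' "$node/meminfo")
done

sum=0
for id in "${!node_total[@]}"; do
    (( sum += node_total[id] ))
done

echo "per-node totals: ${node_total[*]} -> sum=$sum, global=$global, resv=$resv, surp=$surp"
(( sum == global )) || { echo "per-node totals do not add up to HugePages_Total" >&2; exit 1; }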
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.050 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679872 kB' 'MemFree: 50348644 kB' 'MemUsed: 10331228 kB' 'SwapCached: 0 kB' 'Active: 6756496 kB' 'Inactive: 220720 kB' 'Active(anon): 6488348 kB' 'Inactive(anon): 0 kB' 'Active(file): 268148 kB' 'Inactive(file): 220720 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 6686264 kB' 'Mapped: 95608 kB' 'AnonPages: 291128 kB' 'Shmem: 6197396 kB' 'KernelStack: 13736 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118576 kB' 'Slab: 456964 kB' 'SReclaimable: 118576 kB' 'SUnreclaim: 338388 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[scan condensed: setup/common.sh@31-32 walks the node1 snapshot above key by key (MemTotal through HugePages_Free), continuing past every key that is not HugePages_Surp]
00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:52.314 node0=512 expecting 512 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:52.314 node1=1024 expecting 1024 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:52.314 00:04:52.314 real 0m3.437s 00:04:52.314 user 0m1.301s 00:04:52.314 sys 0m2.127s 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.314 19:00:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.314 ************************************ 00:04:52.314 END TEST custom_alloc 00:04:52.314 ************************************ 00:04:52.314 19:00:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:52.314 19:00:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:52.314 19:00:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.314 19:00:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.314 19:00:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.314 ************************************ 00:04:52.314 START TEST no_shrink_alloc 00:04:52.314 ************************************ 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.314 19:00:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.617 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:55.617 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:55.617 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.882 
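The get_test_nr_hugepages call traced above turns the requested size 2097152 into nr_hugepages=1024 and assigns it to the user-supplied node list ('0'). A minimal sketch of that conversion, assuming size and Hugepagesize share the same unit (2097152 / 2048 = 1024); plan_hugepages is a hypothetical stand-in for the real helper in setup/hugepages.sh:

# Sketch only: turn a requested size into a hugepage count and a per-node plan,
# mirroring the get_test_nr_hugepages / get_test_nr_hugepages_per_node trace above.
plan_hugepages() {
  local size_kb=$1; shift
  local -a node_ids=("$@")                 # e.g. 0; empty means "no explicit nodes"
  local hugepagesize_kb
  hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
  local nr_hugepages=$((size_kb / hugepagesize_kb))
  local -A nodes_test=()
  if ((${#node_ids[@]} > 0)); then
    local node
    for node in "${node_ids[@]}"; do
      nodes_test[$node]=$nr_hugepages      # each listed node gets the full count
    done
  else
    nodes_test[0]=$nr_hugepages            # simplified fallback: everything on node 0
  fi
  declare -p nodes_test
}
plan_hugepages 2097152 0   # -> nodes_test[0]=1024 on a system with 2048 kB hugepages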
19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105437224 kB' 'MemAvailable: 108690616 kB' 'Buffers: 2704 kB' 'Cached: 14348868 kB' 'SwapCached: 0 kB' 'Active: 11393980 kB' 'Inactive: 3514444 kB' 'Active(anon): 10983164 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560152 kB' 'Mapped: 162136 kB' 'Shmem: 10426312 kB' 'KReclaimable: 304928 kB' 'Slab: 1141172 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836244 kB' 'KernelStack: 27248 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12563692 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235668 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.882 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105438040 kB' 'MemAvailable: 108691432 kB' 'Buffers: 2704 kB' 'Cached: 14348872 kB' 'SwapCached: 0 kB' 'Active: 11394108 kB' 
'Inactive: 3514444 kB' 'Active(anon): 10983292 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 560324 kB' 'Mapped: 162076 kB' 'Shmem: 10426316 kB' 'KReclaimable: 304928 kB' 'Slab: 1141204 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836276 kB' 'KernelStack: 27248 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12563712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.883 
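The get_meminfo calls traced above all follow the same pattern: pick the right meminfo file, strip any "Node N " prefix, split each line on ': ', and return the value for the requested key (or 0 if it is absent). A simplified reconstruction, assuming bash 4+; get_meminfo_value is my name, not the helper in setup/common.sh:

shopt -s extglob                            # needed for the "Node +([0-9]) " strip below
get_meminfo_value() {
  local get=$1 node=${2:-}
  local mem_f=/proc/meminfo
  [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
    mem_f=/sys/devices/system/node/node$node/meminfo
  local -a mem
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")          # per-node files prefix each line with "Node N "
  local line var val _
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _ <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "${val:-0}"
      return 0
    fi
  done
  echo 0                                    # key not present: report 0, like the trace
}
get_meminfo_value HugePages_Surp            # system-wide
get_meminfo_value HugePages_Free 0          # node 0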
19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.883 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.884 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 
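As a quick sanity check on the snapshot printed above, the hugepage fields are internally consistent: HugePages_Total (1024) times Hugepagesize (2048 kB) equals the reported Hugetlb pool of 2097152 kB, which is also the size requested by get_test_nr_hugepages. For example:

# Values taken from the meminfo snapshot above.
hugepages_total=1024
hugepagesize_kb=2048
hugetlb_kb=2097152
(( hugepages_total * hugepagesize_kb == hugetlb_kb )) && echo "snapshot is consistent"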
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105437648 kB' 'MemAvailable: 108691040 kB' 'Buffers: 2704 kB' 'Cached: 14348888 kB' 'SwapCached: 0 kB' 'Active: 11394120 kB' 'Inactive: 3514444 kB' 'Active(anon): 10983304 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 560324 kB' 'Mapped: 162076 kB' 'Shmem: 10426332 kB' 'KReclaimable: 304928 kB' 'Slab: 1141204 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836276 kB' 'KernelStack: 27248 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12563732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.885 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 
19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:01 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.886 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.887 19:01:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:55.887 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:56.151 nr_hugepages=1024 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:56.151 resv_hugepages=0 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:56.151 surplus_hugepages=0 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:56.151 anon_hugepages=0 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105437208 kB' 'MemAvailable: 108690600 kB' 'Buffers: 2704 kB' 'Cached: 14348928 kB' 'SwapCached: 0 kB' 'Active: 11393796 kB' 'Inactive: 3514444 kB' 'Active(anon): 10982980 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 
kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 559936 kB' 'Mapped: 162076 kB' 'Shmem: 10426372 kB' 'KReclaimable: 304928 kB' 'Slab: 1141204 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836276 kB' 'KernelStack: 27232 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12563756 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235636 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.151 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.152 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 
19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53014688 kB' 'MemUsed: 12644320 kB' 'SwapCached: 0 kB' 'Active: 4636372 kB' 'Inactive: 3293724 kB' 'Active(anon): 4493704 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7665216 kB' 'Mapped: 66440 kB' 'AnonPages: 268116 kB' 'Shmem: 4228824 kB' 'KernelStack: 13512 kB' 'PageTables: 4844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 
186352 kB' 'Slab: 684092 kB' 'SReclaimable: 186352 kB' 'SUnreclaim: 497740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 
19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.153 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:56.154 node0=1024 expecting 1024 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:56.154 19:01:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:59.487 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:65:00.0 
(144d a80a): Already using the vfio-pci driver 00:04:59.487 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:59.487 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:59.755 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105447884 kB' 'MemAvailable: 108701276 kB' 'Buffers: 2704 kB' 'Cached: 14349028 kB' 'SwapCached: 0 kB' 'Active: 11395688 kB' 'Inactive: 3514444 kB' 'Active(anon): 10984872 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561708 kB' 'Mapped: 162152 kB' 'Shmem: 10426472 kB' 'KReclaimable: 304928 kB' 'Slab: 1141152 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836224 kB' 'KernelStack: 27280 kB' 'PageTables: 8552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'CommitLimit: 70509468 kB' 'Committed_AS: 12564816 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235540 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.755 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.756 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 
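The trace above is setup/common.sh scanning /proc/meminfo one key at a time: every non-matching key takes the "continue" branch until the requested key (AnonHugePages here, HugePages_Surp and HugePages_Rsvd in the blocks that follow) is found, its value is echoed back, and setup/hugepages.sh prints the per-node expectation ('node0=1024 expecting 1024') and compares it. A condensed sketch of that pattern, reconstructed only from the commands visible in the trace; the helper name meminfo_value and the prefix-stripping detail are illustrative, not the exact code in setup/common.sh:

    #!/usr/bin/env bash
    # Sketch (assumed helper name) of the /proc/meminfo lookup traced above.
    meminfo_value() {
        local get=$1 node=${2:-}             # key to fetch, optional NUMA node
        local mem_f=/proc/meminfo
        # Per-node lookups read the node-specific meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while read -r line; do
            line=${line#"Node $node "}        # node files prefix each row with "Node N"
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then     # matching key: emit its value and stop
                echo "${val:-0}"
                return 0
            fi                                # non-matching key: fall through ("continue")
        done < "$mem_f"
        echo 0
    }

    # Verification step, as in setup/hugepages.sh: compare node0 against the expectation.
    total=$(meminfo_value HugePages_Total 0)
    echo "node0=$total expecting 1024"
    [[ $total == 1024 ]]

Every '-- # continue' record in the log corresponds to one non-matching meminfo key in a loop like this, which is why the same scan is repeated verbatim for each statistic (anon, surp, resv) the test reads.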
00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105449240 kB' 'MemAvailable: 108702632 kB' 'Buffers: 2704 kB' 'Cached: 14349028 kB' 'SwapCached: 0 kB' 'Active: 11394988 kB' 'Inactive: 3514444 kB' 'Active(anon): 10984172 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561016 kB' 'Mapped: 162092 kB' 'Shmem: 10426472 kB' 'KReclaimable: 304928 kB' 'Slab: 1141164 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836236 kB' 'KernelStack: 27248 kB' 'PageTables: 8464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12564832 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235508 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.757 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.758 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node= 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105448988 kB' 'MemAvailable: 108702380 kB' 'Buffers: 2704 kB' 'Cached: 14349048 kB' 'SwapCached: 0 kB' 'Active: 11396760 kB' 'Inactive: 3514444 kB' 'Active(anon): 10985944 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 562764 kB' 'Mapped: 162092 kB' 'Shmem: 10426492 kB' 'KReclaimable: 304928 kB' 'Slab: 1141164 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836236 kB' 'KernelStack: 27280 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12606712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235556 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.759 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.760 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:59.761 nr_hugepages=1024 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:59.761 resv_hugepages=0 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:59.761 surplus_hugepages=0 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:59.761 anon_hugepages=0 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Total 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.761 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338880 kB' 'MemFree: 105450024 kB' 'MemAvailable: 108703416 kB' 'Buffers: 2704 kB' 'Cached: 14349068 kB' 'SwapCached: 0 kB' 'Active: 11395152 kB' 'Inactive: 3514444 kB' 'Active(anon): 10984336 kB' 'Inactive(anon): 0 kB' 'Active(file): 410816 kB' 'Inactive(file): 3514444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 561160 kB' 'Mapped: 162092 kB' 'Shmem: 10426512 kB' 'KReclaimable: 304928 kB' 'Slab: 1141164 kB' 'SReclaimable: 304928 kB' 'SUnreclaim: 836236 kB' 'KernelStack: 27216 kB' 'PageTables: 8344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509468 kB' 'Committed_AS: 12565872 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 235524 kB' 'VmallocChunk: 0 kB' 'Percpu: 124992 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4224372 kB' 'DirectMap2M: 29009920 kB' 'DirectMap1G: 102760448 kB' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.762 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 
19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:59.763 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:59.764 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 53026412 kB' 'MemUsed: 12632596 kB' 'SwapCached: 0 kB' 'Active: 4636880 kB' 'Inactive: 3293724 kB' 'Active(anon): 4494212 kB' 'Inactive(anon): 0 kB' 'Active(file): 142668 kB' 'Inactive(file): 3293724 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7665220 kB' 'Mapped: 66440 kB' 'AnonPages: 268608 kB' 'Shmem: 4228828 kB' 'KernelStack: 13512 kB' 'PageTables: 4936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 186352 kB' 'Slab: 683568 kB' 'SReclaimable: 186352 kB' 'SUnreclaim: 497216 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:59.764 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.097 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 
19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:00.098 19:01:05 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:00.098 node0=1024 expecting 1024 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:00.098 00:05:00.098 real 0m7.639s 00:05:00.098 user 0m3.039s 00:05:00.098 sys 0m4.675s 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.098 19:01:05 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:05:00.098 ************************************ 00:05:00.098 END TEST no_shrink_alloc 00:05:00.098 ************************************ 00:05:00.098 19:01:05 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:00.098 19:01:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:00.098 00:05:00.098 real 0m26.651s 00:05:00.098 user 0m10.396s 00:05:00.098 sys 0m16.438s 00:05:00.098 19:01:05 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:00.098 19:01:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:05:00.098 ************************************ 00:05:00.098 END TEST hugepages 00:05:00.098 ************************************ 00:05:00.098 19:01:05 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:00.098 19:01:05 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:00.098 19:01:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:00.098 19:01:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:00.098 19:01:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:00.098 ************************************ 00:05:00.098 START TEST driver 00:05:00.099 ************************************ 00:05:00.099 19:01:06 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:00.099 * Looking for test storage... 
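(Editor's note, not part of the captured output: the long run of "continue" iterations in the no_shrink_alloc trace above is setup/common.sh scanning a /proc/meminfo-style listing field by field until it reaches the requested HugePages_Surp counter, then echoing its value. A minimal bash sketch of that loop follows; the helper name is made up and the logic is simplified to /proc/meminfo only, so treat it as an illustration of the trace rather than the script's actual implementation.)

    # Sketch under stated assumptions -- get_hugepages_surp is a hypothetical name,
    # the loop shape mirrors the IFS=': ' / read -r var val _ pattern in the trace.
    get_hugepages_surp() {
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == HugePages_Surp ]] || continue   # every non-matching field produces
                                                       # one of the "continue" lines above
            echo "$val"                                # surplus hugepage count (plain integer)
            return 0
        done </proc/meminfo
        return 1                                       # field not present
    }

(On the node traced above this would print 0, matching the "echo 0" / "return 0" pair that precedes the "node0=1024 expecting 1024" summary.)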
00:05:00.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:00.099 19:01:06 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:05:00.099 19:01:06 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.099 19:01:06 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.398 19:01:11 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:05.398 19:01:11 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:05.398 19:01:11 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.398 19:01:11 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:05.398 ************************************ 00:05:05.398 START TEST guess_driver 00:05:05.398 ************************************ 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 314 > 0 )) 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:05.398 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:05.398 Looking for driver=vfio-pci 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:05:05.398 19:01:11 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:08.703 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:08.964 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:08.965 19:01:14 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:05:08.965 19:01:14 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:08.965 19:01:14 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:14.257 00:05:14.257 real 0m8.667s 00:05:14.257 user 0m2.940s 00:05:14.257 sys 0m4.934s 00:05:14.257 19:01:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.257 19:01:19 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.257 ************************************ 00:05:14.257 END TEST guess_driver 00:05:14.257 ************************************ 00:05:14.257 19:01:19 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:05:14.257 00:05:14.257 real 0m13.730s 00:05:14.257 user 0m4.436s 00:05:14.257 sys 0m7.690s 00:05:14.257 19:01:19 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:14.257 19:01:19 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:05:14.257 ************************************ 00:05:14.257 END TEST driver 00:05:14.257 ************************************ 00:05:14.257 19:01:19 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:14.257 19:01:19 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:14.257 19:01:19 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:14.257 19:01:19 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.257 19:01:19 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:14.257 ************************************ 00:05:14.257 START TEST devices 00:05:14.257 ************************************ 00:05:14.257 19:01:19 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:14.257 * Looking for test storage... 00:05:14.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:14.258 19:01:19 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:14.258 19:01:19 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:05:14.258 19:01:19 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:14.258 19:01:19 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:18.470 
19:01:23 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:18.470 No valid GPT data, bailing 00:05:18.470 19:01:23 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:18.470 19:01:23 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:18.470 19:01:23 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:18.470 19:01:23 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.470 19:01:23 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:18.470 ************************************ 00:05:18.470 START TEST nvme_mount 00:05:18.470 ************************************ 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:18.470 19:01:23 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:19.043 Creating new GPT entries in memory. 00:05:19.043 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:19.043 other utilities. 00:05:19.043 19:01:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:19.043 19:01:24 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.043 19:01:24 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.043 19:01:24 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.043 19:01:24 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.985 Creating new GPT entries in memory. 00:05:19.985 The operation has completed successfully. 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1193744 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:19.985 19:01:25 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.985 19:01:26 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.985 19:01:26 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:23.284 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.545 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.545 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.807 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:23.807 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:23.807 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.807 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.807 19:01:29 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:27.110 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.371 19:01:33 
setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' '' 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.371 19:01:33 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 
setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:30.673 19:01:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.245 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:31.246 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:31.246 00:05:31.246 real 0m13.177s 00:05:31.246 user 0m4.026s 00:05:31.246 sys 0m7.008s 00:05:31.246 19:01:37 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:31.246 19:01:37 
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:31.246 ************************************ 00:05:31.246 END TEST nvme_mount 00:05:31.246 ************************************ 00:05:31.246 19:01:37 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:31.246 19:01:37 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:31.246 19:01:37 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:31.246 19:01:37 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.246 19:01:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:31.246 ************************************ 00:05:31.246 START TEST dm_mount 00:05:31.246 ************************************ 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:31.246 19:01:37 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:32.188 Creating new GPT entries in memory. 00:05:32.188 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:32.188 other utilities. 00:05:32.188 19:01:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:32.188 19:01:38 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.188 19:01:38 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:32.188 19:01:38 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.188 19:01:38 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:33.131 Creating new GPT entries in memory. 00:05:33.131 The operation has completed successfully. 00:05:33.131 19:01:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:33.131 19:01:39 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.131 19:01:39 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:33.131 19:01:39 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:33.131 19:01:39 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:34.516 The operation has completed successfully. 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1198790 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:34.516 19:01:40 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == 
\0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:37.821 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:37.822 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:37.822 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:37.822 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:38.083 19:01:43 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:38.083 19:01:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:65:00.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.6 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.7 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.4 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.5 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.2 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.3 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.0 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:01.1 == \0\0\0\0\:\6\5\:\0\0\.\0 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:41.417 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:41.417 00:05:41.417 real 0m10.353s 00:05:41.417 user 0m2.737s 00:05:41.417 sys 0m4.670s 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.417 19:01:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:41.417 ************************************ 00:05:41.417 END TEST dm_mount 00:05:41.417 ************************************ 00:05:41.678 19:01:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # 
return 0 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:41.678 19:01:47 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:41.938 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:41.938 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:05:41.938 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:41.938 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:41.938 19:01:47 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:41.938 00:05:41.938 real 0m28.021s 00:05:41.938 user 0m8.305s 00:05:41.938 sys 0m14.491s 00:05:41.938 19:01:47 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.938 19:01:47 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:41.938 ************************************ 00:05:41.938 END TEST devices 00:05:41.938 ************************************ 00:05:41.938 19:01:47 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:41.938 00:05:41.938 real 1m33.953s 00:05:41.938 user 0m31.391s 00:05:41.938 sys 0m53.489s 00:05:41.938 19:01:47 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:41.938 19:01:47 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:41.938 ************************************ 00:05:41.938 END TEST setup.sh 00:05:41.938 ************************************ 00:05:41.938 19:01:47 -- common/autotest_common.sh@1142 -- # return 0 00:05:41.938 19:01:47 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:45.234 Hugepages 00:05:45.234 node hugesize free / total 00:05:45.234 node0 1048576kB 0 / 0 00:05:45.234 node0 2048kB 2048 / 2048 00:05:45.234 node1 1048576kB 0 / 0 00:05:45.234 node1 2048kB 0 / 0 00:05:45.234 00:05:45.234 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:45.234 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:45.234 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:45.234 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:45.234 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:45.234 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:45.234 19:01:51 -- spdk/autotest.sh@130 -- # uname -s 00:05:45.234 19:01:51 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:45.234 19:01:51 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:45.234 19:01:51 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:48.528 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:48.528 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:48.528 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:48.528 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:48.528 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:48.528 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:48.528 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:48.789 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:50.703 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:50.964 19:01:56 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:51.903 19:01:57 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:51.903 19:01:57 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:51.903 19:01:57 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:51.903 19:01:57 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:51.903 19:01:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:51.903 19:01:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:51.903 19:01:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:51.903 19:01:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:51.903 19:01:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:51.903 19:01:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:51.903 19:01:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:51.903 19:01:57 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:55.200 Waiting for block devices as requested 00:05:55.200 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:55.200 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:55.200 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:55.461 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:55.461 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:55.461 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:55.721 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:55.721 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:55.721 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:55.982 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:55.982 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:55.982 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:56.242 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:56.242 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:56.242 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:56.503 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:56.503 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:56.764 19:02:02 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:56.765 19:02:02 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:56.765 19:02:02 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:56.765 19:02:02 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:56.765 19:02:02 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:56.765 19:02:02 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:56.765 19:02:02 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:56.765 19:02:02 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:56.765 19:02:02 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:56.765 19:02:02 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:56.765 19:02:02 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:56.765 19:02:02 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:56.765 19:02:02 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:56.765 19:02:02 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:56.765 19:02:02 -- common/autotest_common.sh@1557 -- # continue 00:05:56.765 19:02:02 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:56.765 19:02:02 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.765 19:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.765 19:02:02 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:56.765 19:02:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:56.765 19:02:02 -- common/autotest_common.sh@10 -- # set +x 00:05:56.765 19:02:02 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:00.066 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:00.066 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
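As an aside, the NVMe discovery and OACS capability check traced above (the get_nvme_bdfs helper and the nvme id-ctrl probes in autotest_common.sh) boil down to roughly the following. This is a simplified sketch: the workspace path, the BDF 0000:65:00.0 and the 0x5f OACS value are taken from this run, nvme-cli and jq are assumed to be installed, and the variable names are chosen here for illustration. The driver rebind output from setup.sh continues below.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # Collect NVMe PCI addresses the way the traced get_nvme_bdfs helper does.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  for bdf in "${bdfs[@]}"; do
      # Map the PCI address to its /dev/nvmeX controller node via sysfs.
      ctrlr_path=$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/nvme")
      ctrlr=/dev/$(basename "$ctrlr_path")            # /dev/nvme0 on this machine
      # Read Optional Admin Command Support from Identify Controller.
      oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)   # ' 0x5f' here
      if (( oacs & 0x8 )); then                       # bit 3 = namespace management
          echo "$ctrlr supports namespace management"
      fi
  done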
00:06:00.326 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:06:00.326 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:06:00.586 19:02:06 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:06:00.586 19:02:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:00.586 19:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.586 19:02:06 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:06:00.586 19:02:06 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:06:00.586 19:02:06 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:06:00.586 19:02:06 -- common/autotest_common.sh@1577 -- # bdfs=() 00:06:00.586 19:02:06 -- common/autotest_common.sh@1577 -- # local bdfs 00:06:00.586 19:02:06 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:06:00.586 19:02:06 -- common/autotest_common.sh@1513 -- # bdfs=() 00:06:00.586 19:02:06 -- common/autotest_common.sh@1513 -- # local bdfs 00:06:00.586 19:02:06 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:00.586 19:02:06 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:00.586 19:02:06 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:06:00.847 19:02:06 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:06:00.847 19:02:06 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:06:00.847 19:02:06 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:06:00.847 19:02:06 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:06:00.847 19:02:06 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:06:00.847 19:02:06 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:06:00.847 19:02:06 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:06:00.847 19:02:06 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:06:00.847 19:02:06 -- common/autotest_common.sh@1593 -- # return 0 00:06:00.847 19:02:06 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:06:00.847 19:02:06 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:06:00.847 19:02:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:00.847 19:02:06 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:06:00.847 19:02:06 -- spdk/autotest.sh@162 -- # timing_enter lib 00:06:00.847 19:02:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:00.847 19:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.847 19:02:06 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:06:00.847 19:02:06 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:00.847 19:02:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.847 19:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.847 19:02:06 -- common/autotest_common.sh@10 -- # set +x 00:06:00.847 ************************************ 00:06:00.847 START TEST env 00:06:00.847 ************************************ 00:06:00.847 19:02:06 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:00.847 * Looking for test storage... 
00:06:00.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:00.847 19:02:06 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:00.847 19:02:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:00.847 19:02:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.847 19:02:06 env -- common/autotest_common.sh@10 -- # set +x 00:06:00.847 ************************************ 00:06:00.847 START TEST env_memory 00:06:00.847 ************************************ 00:06:00.847 19:02:06 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.107 00:06:01.107 00:06:01.107 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.107 http://cunit.sourceforge.net/ 00:06:01.107 00:06:01.107 00:06:01.107 Suite: memory 00:06:01.107 Test: alloc and free memory map ...[2024-07-12 19:02:07.019525] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:01.107 passed 00:06:01.107 Test: mem map translation ...[2024-07-12 19:02:07.045089] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:01.107 [2024-07-12 19:02:07.045118] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:01.107 [2024-07-12 19:02:07.045172] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:01.107 [2024-07-12 19:02:07.045182] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:01.107 passed 00:06:01.107 Test: mem map registration ...[2024-07-12 19:02:07.100471] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:01.107 [2024-07-12 19:02:07.100492] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:01.107 passed 00:06:01.107 Test: mem map adjacent registrations ...passed 00:06:01.107 00:06:01.107 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.107 suites 1 1 n/a 0 0 00:06:01.107 tests 4 4 4 0 0 00:06:01.107 asserts 152 152 152 0 n/a 00:06:01.107 00:06:01.107 Elapsed time = 0.193 seconds 00:06:01.107 00:06:01.107 real 0m0.208s 00:06:01.107 user 0m0.198s 00:06:01.107 sys 0m0.009s 00:06:01.107 19:02:07 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.107 19:02:07 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:01.107 ************************************ 00:06:01.107 END TEST env_memory 00:06:01.107 ************************************ 00:06:01.107 19:02:07 env -- common/autotest_common.sh@1142 -- # return 0 00:06:01.107 19:02:07 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.107 19:02:07 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
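The START TEST / END TEST banners and the real/user/sys timings that frame each test in this log come from the run_test wrapper in test/common/autotest_common.sh; the function below is only a minimal stand-in that mimics the visible behaviour (argument-count guard, banner lines, timing via the shell's time keyword), not the framework's actual implementation. The env_vtophys run that starts next goes through the same wrapper.

  run_test() {
      # Guard mirrored from the traced '[' 2 -le 1 ']' check: need a name plus a command.
      if [ $# -le 1 ]; then
          echo "usage: run_test <name> <command> [args...]" >&2
          return 1
      fi
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                    # produces the real/user/sys lines seen in this log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # e.g. run_test env_memory "$rootdir/test/env/memory/memory_ut"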
00:06:01.108 19:02:07 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.108 19:02:07 env -- common/autotest_common.sh@10 -- # set +x 00:06:01.368 ************************************ 00:06:01.368 START TEST env_vtophys 00:06:01.368 ************************************ 00:06:01.368 19:02:07 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.368 EAL: lib.eal log level changed from notice to debug 00:06:01.368 EAL: Detected lcore 0 as core 0 on socket 0 00:06:01.368 EAL: Detected lcore 1 as core 1 on socket 0 00:06:01.368 EAL: Detected lcore 2 as core 2 on socket 0 00:06:01.368 EAL: Detected lcore 3 as core 3 on socket 0 00:06:01.368 EAL: Detected lcore 4 as core 4 on socket 0 00:06:01.368 EAL: Detected lcore 5 as core 5 on socket 0 00:06:01.368 EAL: Detected lcore 6 as core 6 on socket 0 00:06:01.368 EAL: Detected lcore 7 as core 7 on socket 0 00:06:01.368 EAL: Detected lcore 8 as core 8 on socket 0 00:06:01.368 EAL: Detected lcore 9 as core 9 on socket 0 00:06:01.368 EAL: Detected lcore 10 as core 10 on socket 0 00:06:01.368 EAL: Detected lcore 11 as core 11 on socket 0 00:06:01.368 EAL: Detected lcore 12 as core 12 on socket 0 00:06:01.368 EAL: Detected lcore 13 as core 13 on socket 0 00:06:01.368 EAL: Detected lcore 14 as core 14 on socket 0 00:06:01.368 EAL: Detected lcore 15 as core 15 on socket 0 00:06:01.368 EAL: Detected lcore 16 as core 16 on socket 0 00:06:01.368 EAL: Detected lcore 17 as core 17 on socket 0 00:06:01.368 EAL: Detected lcore 18 as core 18 on socket 0 00:06:01.368 EAL: Detected lcore 19 as core 19 on socket 0 00:06:01.368 EAL: Detected lcore 20 as core 20 on socket 0 00:06:01.368 EAL: Detected lcore 21 as core 21 on socket 0 00:06:01.368 EAL: Detected lcore 22 as core 22 on socket 0 00:06:01.368 EAL: Detected lcore 23 as core 23 on socket 0 00:06:01.368 EAL: Detected lcore 24 as core 24 on socket 0 00:06:01.368 EAL: Detected lcore 25 as core 25 on socket 0 00:06:01.368 EAL: Detected lcore 26 as core 26 on socket 0 00:06:01.368 EAL: Detected lcore 27 as core 27 on socket 0 00:06:01.368 EAL: Detected lcore 28 as core 28 on socket 0 00:06:01.368 EAL: Detected lcore 29 as core 29 on socket 0 00:06:01.368 EAL: Detected lcore 30 as core 30 on socket 0 00:06:01.368 EAL: Detected lcore 31 as core 31 on socket 0 00:06:01.368 EAL: Detected lcore 32 as core 32 on socket 0 00:06:01.368 EAL: Detected lcore 33 as core 33 on socket 0 00:06:01.368 EAL: Detected lcore 34 as core 34 on socket 0 00:06:01.368 EAL: Detected lcore 35 as core 35 on socket 0 00:06:01.368 EAL: Detected lcore 36 as core 0 on socket 1 00:06:01.368 EAL: Detected lcore 37 as core 1 on socket 1 00:06:01.368 EAL: Detected lcore 38 as core 2 on socket 1 00:06:01.368 EAL: Detected lcore 39 as core 3 on socket 1 00:06:01.368 EAL: Detected lcore 40 as core 4 on socket 1 00:06:01.368 EAL: Detected lcore 41 as core 5 on socket 1 00:06:01.368 EAL: Detected lcore 42 as core 6 on socket 1 00:06:01.368 EAL: Detected lcore 43 as core 7 on socket 1 00:06:01.368 EAL: Detected lcore 44 as core 8 on socket 1 00:06:01.368 EAL: Detected lcore 45 as core 9 on socket 1 00:06:01.368 EAL: Detected lcore 46 as core 10 on socket 1 00:06:01.368 EAL: Detected lcore 47 as core 11 on socket 1 00:06:01.368 EAL: Detected lcore 48 as core 12 on socket 1 00:06:01.368 EAL: Detected lcore 49 as core 13 on socket 1 00:06:01.368 EAL: Detected lcore 50 as core 14 on socket 1 00:06:01.368 EAL: Detected lcore 51 as core 15 on socket 1 00:06:01.368 
EAL: Detected lcore 52 as core 16 on socket 1 00:06:01.368 EAL: Detected lcore 53 as core 17 on socket 1 00:06:01.368 EAL: Detected lcore 54 as core 18 on socket 1 00:06:01.368 EAL: Detected lcore 55 as core 19 on socket 1 00:06:01.368 EAL: Detected lcore 56 as core 20 on socket 1 00:06:01.368 EAL: Detected lcore 57 as core 21 on socket 1 00:06:01.368 EAL: Detected lcore 58 as core 22 on socket 1 00:06:01.368 EAL: Detected lcore 59 as core 23 on socket 1 00:06:01.368 EAL: Detected lcore 60 as core 24 on socket 1 00:06:01.368 EAL: Detected lcore 61 as core 25 on socket 1 00:06:01.368 EAL: Detected lcore 62 as core 26 on socket 1 00:06:01.368 EAL: Detected lcore 63 as core 27 on socket 1 00:06:01.368 EAL: Detected lcore 64 as core 28 on socket 1 00:06:01.368 EAL: Detected lcore 65 as core 29 on socket 1 00:06:01.368 EAL: Detected lcore 66 as core 30 on socket 1 00:06:01.368 EAL: Detected lcore 67 as core 31 on socket 1 00:06:01.368 EAL: Detected lcore 68 as core 32 on socket 1 00:06:01.368 EAL: Detected lcore 69 as core 33 on socket 1 00:06:01.368 EAL: Detected lcore 70 as core 34 on socket 1 00:06:01.368 EAL: Detected lcore 71 as core 35 on socket 1 00:06:01.368 EAL: Detected lcore 72 as core 0 on socket 0 00:06:01.368 EAL: Detected lcore 73 as core 1 on socket 0 00:06:01.368 EAL: Detected lcore 74 as core 2 on socket 0 00:06:01.369 EAL: Detected lcore 75 as core 3 on socket 0 00:06:01.369 EAL: Detected lcore 76 as core 4 on socket 0 00:06:01.369 EAL: Detected lcore 77 as core 5 on socket 0 00:06:01.369 EAL: Detected lcore 78 as core 6 on socket 0 00:06:01.369 EAL: Detected lcore 79 as core 7 on socket 0 00:06:01.369 EAL: Detected lcore 80 as core 8 on socket 0 00:06:01.369 EAL: Detected lcore 81 as core 9 on socket 0 00:06:01.369 EAL: Detected lcore 82 as core 10 on socket 0 00:06:01.369 EAL: Detected lcore 83 as core 11 on socket 0 00:06:01.369 EAL: Detected lcore 84 as core 12 on socket 0 00:06:01.369 EAL: Detected lcore 85 as core 13 on socket 0 00:06:01.369 EAL: Detected lcore 86 as core 14 on socket 0 00:06:01.369 EAL: Detected lcore 87 as core 15 on socket 0 00:06:01.369 EAL: Detected lcore 88 as core 16 on socket 0 00:06:01.369 EAL: Detected lcore 89 as core 17 on socket 0 00:06:01.369 EAL: Detected lcore 90 as core 18 on socket 0 00:06:01.369 EAL: Detected lcore 91 as core 19 on socket 0 00:06:01.369 EAL: Detected lcore 92 as core 20 on socket 0 00:06:01.369 EAL: Detected lcore 93 as core 21 on socket 0 00:06:01.369 EAL: Detected lcore 94 as core 22 on socket 0 00:06:01.369 EAL: Detected lcore 95 as core 23 on socket 0 00:06:01.369 EAL: Detected lcore 96 as core 24 on socket 0 00:06:01.369 EAL: Detected lcore 97 as core 25 on socket 0 00:06:01.369 EAL: Detected lcore 98 as core 26 on socket 0 00:06:01.369 EAL: Detected lcore 99 as core 27 on socket 0 00:06:01.369 EAL: Detected lcore 100 as core 28 on socket 0 00:06:01.369 EAL: Detected lcore 101 as core 29 on socket 0 00:06:01.369 EAL: Detected lcore 102 as core 30 on socket 0 00:06:01.369 EAL: Detected lcore 103 as core 31 on socket 0 00:06:01.369 EAL: Detected lcore 104 as core 32 on socket 0 00:06:01.369 EAL: Detected lcore 105 as core 33 on socket 0 00:06:01.369 EAL: Detected lcore 106 as core 34 on socket 0 00:06:01.369 EAL: Detected lcore 107 as core 35 on socket 0 00:06:01.369 EAL: Detected lcore 108 as core 0 on socket 1 00:06:01.369 EAL: Detected lcore 109 as core 1 on socket 1 00:06:01.369 EAL: Detected lcore 110 as core 2 on socket 1 00:06:01.369 EAL: Detected lcore 111 as core 3 on socket 1 00:06:01.369 EAL: Detected 
lcore 112 as core 4 on socket 1 00:06:01.369 EAL: Detected lcore 113 as core 5 on socket 1 00:06:01.369 EAL: Detected lcore 114 as core 6 on socket 1 00:06:01.369 EAL: Detected lcore 115 as core 7 on socket 1 00:06:01.369 EAL: Detected lcore 116 as core 8 on socket 1 00:06:01.369 EAL: Detected lcore 117 as core 9 on socket 1 00:06:01.369 EAL: Detected lcore 118 as core 10 on socket 1 00:06:01.369 EAL: Detected lcore 119 as core 11 on socket 1 00:06:01.369 EAL: Detected lcore 120 as core 12 on socket 1 00:06:01.369 EAL: Detected lcore 121 as core 13 on socket 1 00:06:01.369 EAL: Detected lcore 122 as core 14 on socket 1 00:06:01.369 EAL: Detected lcore 123 as core 15 on socket 1 00:06:01.369 EAL: Detected lcore 124 as core 16 on socket 1 00:06:01.369 EAL: Detected lcore 125 as core 17 on socket 1 00:06:01.369 EAL: Detected lcore 126 as core 18 on socket 1 00:06:01.369 EAL: Detected lcore 127 as core 19 on socket 1 00:06:01.369 EAL: Skipped lcore 128 as core 20 on socket 1 00:06:01.369 EAL: Skipped lcore 129 as core 21 on socket 1 00:06:01.369 EAL: Skipped lcore 130 as core 22 on socket 1 00:06:01.369 EAL: Skipped lcore 131 as core 23 on socket 1 00:06:01.369 EAL: Skipped lcore 132 as core 24 on socket 1 00:06:01.369 EAL: Skipped lcore 133 as core 25 on socket 1 00:06:01.369 EAL: Skipped lcore 134 as core 26 on socket 1 00:06:01.369 EAL: Skipped lcore 135 as core 27 on socket 1 00:06:01.369 EAL: Skipped lcore 136 as core 28 on socket 1 00:06:01.369 EAL: Skipped lcore 137 as core 29 on socket 1 00:06:01.369 EAL: Skipped lcore 138 as core 30 on socket 1 00:06:01.369 EAL: Skipped lcore 139 as core 31 on socket 1 00:06:01.369 EAL: Skipped lcore 140 as core 32 on socket 1 00:06:01.369 EAL: Skipped lcore 141 as core 33 on socket 1 00:06:01.369 EAL: Skipped lcore 142 as core 34 on socket 1 00:06:01.369 EAL: Skipped lcore 143 as core 35 on socket 1 00:06:01.369 EAL: Maximum logical cores by configuration: 128 00:06:01.369 EAL: Detected CPU lcores: 128 00:06:01.369 EAL: Detected NUMA nodes: 2 00:06:01.369 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:01.369 EAL: Detected shared linkage of DPDK 00:06:01.369 EAL: No shared files mode enabled, IPC will be disabled 00:06:01.369 EAL: Bus pci wants IOVA as 'DC' 00:06:01.369 EAL: Buses did not request a specific IOVA mode. 00:06:01.369 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:01.369 EAL: Selected IOVA mode 'VA' 00:06:01.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.369 EAL: Probing VFIO support... 00:06:01.369 EAL: IOMMU type 1 (Type 1) is supported 00:06:01.369 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:01.369 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:01.369 EAL: VFIO support initialized 00:06:01.369 EAL: Ask a virtual area of 0x2e000 bytes 00:06:01.369 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:01.369 EAL: Setting up physically contiguous memory... 
00:06:01.369 EAL: Setting maximum number of open files to 524288 00:06:01.369 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:01.369 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:01.369 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:01.369 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:01.369 EAL: Ask a virtual area of 0x61000 bytes 00:06:01.369 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:01.369 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:01.369 EAL: Ask a virtual area of 0x400000000 bytes 00:06:01.369 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:01.369 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:01.369 EAL: Hugepages will be freed exactly as allocated. 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: TSC frequency is ~2400000 KHz 00:06:01.369 EAL: Main lcore 0 is ready (tid=7f281214ea00;cpuset=[0]) 00:06:01.369 EAL: Trying to obtain current memory policy. 00:06:01.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.369 EAL: Restoring previous memory policy: 0 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was expanded by 2MB 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:01.369 EAL: Mem event callback 'spdk:(nil)' registered 00:06:01.369 00:06:01.369 00:06:01.369 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.369 http://cunit.sourceforge.net/ 00:06:01.369 00:06:01.369 00:06:01.369 Suite: components_suite 00:06:01.369 Test: vtophys_malloc_test ...passed 00:06:01.369 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:01.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.369 EAL: Restoring previous memory policy: 4 00:06:01.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was expanded by 4MB 00:06:01.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was shrunk by 4MB 00:06:01.369 EAL: Trying to obtain current memory policy. 00:06:01.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.369 EAL: Restoring previous memory policy: 4 00:06:01.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was expanded by 6MB 00:06:01.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was shrunk by 6MB 00:06:01.369 EAL: Trying to obtain current memory policy. 00:06:01.369 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.369 EAL: Restoring previous memory policy: 4 00:06:01.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was expanded by 10MB 00:06:01.369 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.369 EAL: request: mp_malloc_sync 00:06:01.369 EAL: No shared files mode enabled, IPC is disabled 00:06:01.369 EAL: Heap on socket 0 was shrunk by 10MB 00:06:01.370 EAL: Trying to obtain current memory policy. 
00:06:01.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.370 EAL: Restoring previous memory policy: 4 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was expanded by 18MB 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was shrunk by 18MB 00:06:01.370 EAL: Trying to obtain current memory policy. 00:06:01.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.370 EAL: Restoring previous memory policy: 4 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was expanded by 34MB 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was shrunk by 34MB 00:06:01.370 EAL: Trying to obtain current memory policy. 00:06:01.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.370 EAL: Restoring previous memory policy: 4 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was expanded by 66MB 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was shrunk by 66MB 00:06:01.370 EAL: Trying to obtain current memory policy. 00:06:01.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.370 EAL: Restoring previous memory policy: 4 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was expanded by 130MB 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was shrunk by 130MB 00:06:01.370 EAL: Trying to obtain current memory policy. 00:06:01.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.370 EAL: Restoring previous memory policy: 4 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was expanded by 258MB 00:06:01.370 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.370 EAL: request: mp_malloc_sync 00:06:01.370 EAL: No shared files mode enabled, IPC is disabled 00:06:01.370 EAL: Heap on socket 0 was shrunk by 258MB 00:06:01.370 EAL: Trying to obtain current memory policy. 
00:06:01.370 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.630 EAL: Restoring previous memory policy: 4 00:06:01.630 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.630 EAL: request: mp_malloc_sync 00:06:01.630 EAL: No shared files mode enabled, IPC is disabled 00:06:01.630 EAL: Heap on socket 0 was expanded by 514MB 00:06:01.630 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.630 EAL: request: mp_malloc_sync 00:06:01.630 EAL: No shared files mode enabled, IPC is disabled 00:06:01.630 EAL: Heap on socket 0 was shrunk by 514MB 00:06:01.630 EAL: Trying to obtain current memory policy. 00:06:01.630 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:01.890 EAL: Restoring previous memory policy: 4 00:06:01.890 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.890 EAL: request: mp_malloc_sync 00:06:01.890 EAL: No shared files mode enabled, IPC is disabled 00:06:01.890 EAL: Heap on socket 0 was expanded by 1026MB 00:06:01.890 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.890 EAL: request: mp_malloc_sync 00:06:01.890 EAL: No shared files mode enabled, IPC is disabled 00:06:01.890 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:01.890 passed 00:06:01.890 00:06:01.890 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.890 suites 1 1 n/a 0 0 00:06:01.890 tests 2 2 2 0 0 00:06:01.890 asserts 497 497 497 0 n/a 00:06:01.890 00:06:01.890 Elapsed time = 0.642 seconds 00:06:01.890 EAL: Calling mem event callback 'spdk:(nil)' 00:06:01.890 EAL: request: mp_malloc_sync 00:06:01.890 EAL: No shared files mode enabled, IPC is disabled 00:06:01.890 EAL: Heap on socket 0 was shrunk by 2MB 00:06:01.890 EAL: No shared files mode enabled, IPC is disabled 00:06:01.890 EAL: No shared files mode enabled, IPC is disabled 00:06:01.890 EAL: No shared files mode enabled, IPC is disabled 00:06:01.890 00:06:01.890 real 0m0.757s 00:06:01.890 user 0m0.401s 00:06:01.890 sys 0m0.330s 00:06:01.890 19:02:08 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.890 19:02:08 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:06:01.890 ************************************ 00:06:01.890 END TEST env_vtophys 00:06:01.890 ************************************ 00:06:02.150 19:02:08 env -- common/autotest_common.sh@1142 -- # return 0 00:06:02.150 19:02:08 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:02.150 19:02:08 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.150 19:02:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.150 19:02:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.150 ************************************ 00:06:02.150 START TEST env_pci 00:06:02.150 ************************************ 00:06:02.150 19:02:08 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:02.150 00:06:02.150 00:06:02.150 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.150 http://cunit.sourceforge.net/ 00:06:02.150 00:06:02.150 00:06:02.150 Suite: pci 00:06:02.150 Test: pci_hook ...[2024-07-12 19:02:08.105862] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1209976 has claimed it 00:06:02.150 EAL: Cannot find device (10000:00:01.0) 00:06:02.150 EAL: Failed to attach device on primary process 00:06:02.150 passed 00:06:02.150 
00:06:02.150 Run Summary: Type Total Ran Passed Failed Inactive 00:06:02.150 suites 1 1 n/a 0 0 00:06:02.150 tests 1 1 1 0 0 00:06:02.150 asserts 25 25 25 0 n/a 00:06:02.150 00:06:02.150 Elapsed time = 0.028 seconds 00:06:02.150 00:06:02.150 real 0m0.048s 00:06:02.150 user 0m0.014s 00:06:02.150 sys 0m0.033s 00:06:02.150 19:02:08 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.150 19:02:08 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:02.150 ************************************ 00:06:02.150 END TEST env_pci 00:06:02.150 ************************************ 00:06:02.150 19:02:08 env -- common/autotest_common.sh@1142 -- # return 0 00:06:02.150 19:02:08 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:02.150 19:02:08 env -- env/env.sh@15 -- # uname 00:06:02.150 19:02:08 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:02.150 19:02:08 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:02.150 19:02:08 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.150 19:02:08 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:06:02.150 19:02:08 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.150 19:02:08 env -- common/autotest_common.sh@10 -- # set +x 00:06:02.150 ************************************ 00:06:02.150 START TEST env_dpdk_post_init 00:06:02.150 ************************************ 00:06:02.150 19:02:08 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:02.150 EAL: Detected CPU lcores: 128 00:06:02.150 EAL: Detected NUMA nodes: 2 00:06:02.150 EAL: Detected shared linkage of DPDK 00:06:02.150 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:02.150 EAL: Selected IOVA mode 'VA' 00:06:02.150 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.150 EAL: VFIO support initialized 00:06:02.150 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:02.409 EAL: Using IOMMU type 1 (Type 1) 00:06:02.409 EAL: Ignore mapping IO port bar(1) 00:06:02.668 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:06:02.668 EAL: Ignore mapping IO port bar(1) 00:06:02.668 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:06:02.929 EAL: Ignore mapping IO port bar(1) 00:06:02.929 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:06:03.189 EAL: Ignore mapping IO port bar(1) 00:06:03.189 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:06:03.449 EAL: Ignore mapping IO port bar(1) 00:06:03.449 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:06:03.449 EAL: Ignore mapping IO port bar(1) 00:06:03.709 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:06:03.709 EAL: Ignore mapping IO port bar(1) 00:06:03.968 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:06:03.968 EAL: Ignore mapping IO port bar(1) 00:06:04.229 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:06:04.229 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:06:04.490 EAL: Ignore mapping IO port bar(1) 00:06:04.490 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:06:04.749 EAL: Ignore mapping IO port bar(1) 00:06:04.750 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:06:05.009 EAL: Ignore mapping IO port bar(1) 00:06:05.009 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:06:05.009 EAL: Ignore mapping IO port bar(1) 00:06:05.270 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:06:05.270 EAL: Ignore mapping IO port bar(1) 00:06:05.531 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:06:05.531 EAL: Ignore mapping IO port bar(1) 00:06:05.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:06:05.792 EAL: Ignore mapping IO port bar(1) 00:06:05.792 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:06:06.052 EAL: Ignore mapping IO port bar(1) 00:06:06.052 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:06:06.052 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:06:06.052 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:06:06.313 Starting DPDK initialization... 00:06:06.313 Starting SPDK post initialization... 00:06:06.313 SPDK NVMe probe 00:06:06.313 Attaching to 0000:65:00.0 00:06:06.313 Attached to 0000:65:00.0 00:06:06.313 Cleaning up... 00:06:08.281 00:06:08.281 real 0m5.710s 00:06:08.281 user 0m0.178s 00:06:08.281 sys 0m0.077s 00:06:08.281 19:02:13 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.281 19:02:13 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:08.281 ************************************ 00:06:08.281 END TEST env_dpdk_post_init 00:06:08.281 ************************************ 00:06:08.281 19:02:13 env -- common/autotest_common.sh@1142 -- # return 0 00:06:08.281 19:02:13 env -- env/env.sh@26 -- # uname 00:06:08.281 19:02:13 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:08.281 19:02:13 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.281 19:02:13 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.281 19:02:13 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.281 19:02:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.281 ************************************ 00:06:08.281 START TEST env_mem_callbacks 00:06:08.281 ************************************ 00:06:08.281 19:02:14 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.281 EAL: Detected CPU lcores: 128 00:06:08.281 EAL: Detected NUMA nodes: 2 00:06:08.281 EAL: Detected shared linkage of DPDK 00:06:08.281 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.281 EAL: Selected IOVA mode 'VA' 00:06:08.281 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.281 EAL: VFIO support initialized 00:06:08.281 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.281 00:06:08.281 00:06:08.281 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.281 http://cunit.sourceforge.net/ 00:06:08.281 00:06:08.281 00:06:08.281 Suite: memory 00:06:08.281 Test: test ... 
00:06:08.281 register 0x200000200000 2097152 00:06:08.281 malloc 3145728 00:06:08.281 register 0x200000400000 4194304 00:06:08.281 buf 0x200000500000 len 3145728 PASSED 00:06:08.281 malloc 64 00:06:08.281 buf 0x2000004fff40 len 64 PASSED 00:06:08.281 malloc 4194304 00:06:08.281 register 0x200000800000 6291456 00:06:08.281 buf 0x200000a00000 len 4194304 PASSED 00:06:08.281 free 0x200000500000 3145728 00:06:08.281 free 0x2000004fff40 64 00:06:08.281 unregister 0x200000400000 4194304 PASSED 00:06:08.281 free 0x200000a00000 4194304 00:06:08.281 unregister 0x200000800000 6291456 PASSED 00:06:08.281 malloc 8388608 00:06:08.281 register 0x200000400000 10485760 00:06:08.281 buf 0x200000600000 len 8388608 PASSED 00:06:08.281 free 0x200000600000 8388608 00:06:08.281 unregister 0x200000400000 10485760 PASSED 00:06:08.281 passed 00:06:08.281 00:06:08.281 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.281 suites 1 1 n/a 0 0 00:06:08.281 tests 1 1 1 0 0 00:06:08.281 asserts 15 15 15 0 n/a 00:06:08.281 00:06:08.281 Elapsed time = 0.007 seconds 00:06:08.281 00:06:08.281 real 0m0.064s 00:06:08.281 user 0m0.018s 00:06:08.281 sys 0m0.045s 00:06:08.282 19:02:14 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.282 19:02:14 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 ************************************ 00:06:08.282 END TEST env_mem_callbacks 00:06:08.282 ************************************ 00:06:08.282 19:02:14 env -- common/autotest_common.sh@1142 -- # return 0 00:06:08.282 00:06:08.282 real 0m7.280s 00:06:08.282 user 0m0.992s 00:06:08.282 sys 0m0.832s 00:06:08.282 19:02:14 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.282 19:02:14 env -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 ************************************ 00:06:08.282 END TEST env 00:06:08.282 ************************************ 00:06:08.282 19:02:14 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.282 19:02:14 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.282 19:02:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.282 19:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.282 19:02:14 -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 ************************************ 00:06:08.282 START TEST rpc 00:06:08.282 ************************************ 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:08.282 * Looking for test storage... 00:06:08.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:08.282 19:02:14 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1211425 00:06:08.282 19:02:14 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.282 19:02:14 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:08.282 19:02:14 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1211425 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@829 -- # '[' -z 1211425 ']' 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
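The rpc.sh prologue traced here starts the SPDK target and waits for its RPC socket before issuing bdev RPCs. A simplified way to reproduce that flow by hand, using scripts/rpc.py instead of the framework's rpc_cmd and waitforlisten helpers, is sketched below; the -e bdev flag, the /var/tmp/spdk.sock path and the bdev_malloc_create 8 512 arguments are the ones seen in this run, while the polling loop and the kill trap are stand-ins. The target's startup output continues below.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$rootdir/build/bin/spdk_tgt" -e bdev &
  spdk_pid=$!
  trap 'kill -9 $spdk_pid' EXIT               # the test framework uses killprocess instead
  # Poll until the target answers on its RPC socket (stand-in for waitforlisten).
  while ! "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done
  # The rpc_integrity test that follows creates a malloc bdev and inspects the bdev list:
  "$rootdir/scripts/rpc.py" bdev_malloc_create 8 512
  "$rootdir/scripts/rpc.py" bdev_get_bdevs | jq length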
00:06:08.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.282 19:02:14 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.282 [2024-07-12 19:02:14.356985] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:08.282 [2024-07-12 19:02:14.357038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211425 ] 00:06:08.282 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.542 [2024-07-12 19:02:14.416683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.542 [2024-07-12 19:02:14.480924] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:08.542 [2024-07-12 19:02:14.480961] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1211425' to capture a snapshot of events at runtime. 00:06:08.542 [2024-07-12 19:02:14.480969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:08.542 [2024-07-12 19:02:14.480975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:08.542 [2024-07-12 19:02:14.480981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1211425 for offline analysis/debug. 00:06:08.542 [2024-07-12 19:02:14.481000] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.111 19:02:15 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.111 19:02:15 rpc -- common/autotest_common.sh@862 -- # return 0 00:06:09.111 19:02:15 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.111 19:02:15 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:09.111 19:02:15 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:09.111 19:02:15 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:09.111 19:02:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.111 19:02:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.111 19:02:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.111 ************************************ 00:06:09.111 START TEST rpc_integrity 00:06:09.111 ************************************ 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:09.111 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.111 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:09.371 { 00:06:09.371 "name": "Malloc0", 00:06:09.371 "aliases": [ 00:06:09.371 "42d40239-75f8-4db6-8deb-5c8bdb01cab1" 00:06:09.371 ], 00:06:09.371 "product_name": "Malloc disk", 00:06:09.371 "block_size": 512, 00:06:09.371 "num_blocks": 16384, 00:06:09.371 "uuid": "42d40239-75f8-4db6-8deb-5c8bdb01cab1", 00:06:09.371 "assigned_rate_limits": { 00:06:09.371 "rw_ios_per_sec": 0, 00:06:09.371 "rw_mbytes_per_sec": 0, 00:06:09.371 "r_mbytes_per_sec": 0, 00:06:09.371 "w_mbytes_per_sec": 0 00:06:09.371 }, 00:06:09.371 "claimed": false, 00:06:09.371 "zoned": false, 00:06:09.371 "supported_io_types": { 00:06:09.371 "read": true, 00:06:09.371 "write": true, 00:06:09.371 "unmap": true, 00:06:09.371 "flush": true, 00:06:09.371 "reset": true, 00:06:09.371 "nvme_admin": false, 00:06:09.371 "nvme_io": false, 00:06:09.371 "nvme_io_md": false, 00:06:09.371 "write_zeroes": true, 00:06:09.371 "zcopy": true, 00:06:09.371 "get_zone_info": false, 00:06:09.371 "zone_management": false, 00:06:09.371 "zone_append": false, 00:06:09.371 "compare": false, 00:06:09.371 "compare_and_write": false, 00:06:09.371 "abort": true, 00:06:09.371 "seek_hole": false, 00:06:09.371 "seek_data": false, 00:06:09.371 "copy": true, 00:06:09.371 "nvme_iov_md": false 00:06:09.371 }, 00:06:09.371 "memory_domains": [ 00:06:09.371 { 00:06:09.371 "dma_device_id": "system", 00:06:09.371 "dma_device_type": 1 00:06:09.371 }, 00:06:09.371 { 00:06:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.371 "dma_device_type": 2 00:06:09.371 } 00:06:09.371 ], 00:06:09.371 "driver_specific": {} 00:06:09.371 } 00:06:09.371 ]' 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 [2024-07-12 19:02:15.303057] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:09.371 [2024-07-12 19:02:15.303090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:09.371 [2024-07-12 19:02:15.303103] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a4d80 00:06:09.371 [2024-07-12 19:02:15.303110] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:09.371 
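The rpc_integrity test above and below drives a plain bdev lifecycle over JSON-RPC; the same sequence can be replayed by hand with scripts/rpc.py against the running spdk_tgt. A minimal sketch, assuming the default /var/tmp/spdk.sock socket and the SPDK repo root as working directory:

  # create an 8 MB malloc bdev with 512-byte blocks (same sizes as the test)
  ./scripts/rpc.py bdev_malloc_create 8 512
  # stack a passthru bdev on top of it, then list both devices
  ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length
  # tear everything down in reverse order
  ./scripts/rpc.py bdev_passthru_delete Passthru0
  ./scripts/rpc.py bdev_malloc_delete Malloc0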
[2024-07-12 19:02:15.304460] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:09.371 [2024-07-12 19:02:15.304481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:09.371 Passthru0 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:09.371 { 00:06:09.371 "name": "Malloc0", 00:06:09.371 "aliases": [ 00:06:09.371 "42d40239-75f8-4db6-8deb-5c8bdb01cab1" 00:06:09.371 ], 00:06:09.371 "product_name": "Malloc disk", 00:06:09.371 "block_size": 512, 00:06:09.371 "num_blocks": 16384, 00:06:09.371 "uuid": "42d40239-75f8-4db6-8deb-5c8bdb01cab1", 00:06:09.371 "assigned_rate_limits": { 00:06:09.371 "rw_ios_per_sec": 0, 00:06:09.371 "rw_mbytes_per_sec": 0, 00:06:09.371 "r_mbytes_per_sec": 0, 00:06:09.371 "w_mbytes_per_sec": 0 00:06:09.371 }, 00:06:09.371 "claimed": true, 00:06:09.371 "claim_type": "exclusive_write", 00:06:09.371 "zoned": false, 00:06:09.371 "supported_io_types": { 00:06:09.371 "read": true, 00:06:09.371 "write": true, 00:06:09.371 "unmap": true, 00:06:09.371 "flush": true, 00:06:09.371 "reset": true, 00:06:09.371 "nvme_admin": false, 00:06:09.371 "nvme_io": false, 00:06:09.371 "nvme_io_md": false, 00:06:09.371 "write_zeroes": true, 00:06:09.371 "zcopy": true, 00:06:09.371 "get_zone_info": false, 00:06:09.371 "zone_management": false, 00:06:09.371 "zone_append": false, 00:06:09.371 "compare": false, 00:06:09.371 "compare_and_write": false, 00:06:09.371 "abort": true, 00:06:09.371 "seek_hole": false, 00:06:09.371 "seek_data": false, 00:06:09.371 "copy": true, 00:06:09.371 "nvme_iov_md": false 00:06:09.371 }, 00:06:09.371 "memory_domains": [ 00:06:09.371 { 00:06:09.371 "dma_device_id": "system", 00:06:09.371 "dma_device_type": 1 00:06:09.371 }, 00:06:09.371 { 00:06:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.371 "dma_device_type": 2 00:06:09.371 } 00:06:09.371 ], 00:06:09.371 "driver_specific": {} 00:06:09.371 }, 00:06:09.371 { 00:06:09.371 "name": "Passthru0", 00:06:09.371 "aliases": [ 00:06:09.371 "c360198d-2347-5f26-b358-89fbfcdf5360" 00:06:09.371 ], 00:06:09.371 "product_name": "passthru", 00:06:09.371 "block_size": 512, 00:06:09.371 "num_blocks": 16384, 00:06:09.371 "uuid": "c360198d-2347-5f26-b358-89fbfcdf5360", 00:06:09.371 "assigned_rate_limits": { 00:06:09.371 "rw_ios_per_sec": 0, 00:06:09.371 "rw_mbytes_per_sec": 0, 00:06:09.371 "r_mbytes_per_sec": 0, 00:06:09.371 "w_mbytes_per_sec": 0 00:06:09.371 }, 00:06:09.371 "claimed": false, 00:06:09.371 "zoned": false, 00:06:09.371 "supported_io_types": { 00:06:09.371 "read": true, 00:06:09.371 "write": true, 00:06:09.371 "unmap": true, 00:06:09.371 "flush": true, 00:06:09.371 "reset": true, 00:06:09.371 "nvme_admin": false, 00:06:09.371 "nvme_io": false, 00:06:09.371 "nvme_io_md": false, 00:06:09.371 "write_zeroes": true, 00:06:09.371 "zcopy": true, 00:06:09.371 "get_zone_info": false, 00:06:09.371 "zone_management": false, 00:06:09.371 "zone_append": false, 00:06:09.371 "compare": false, 00:06:09.371 "compare_and_write": false, 00:06:09.371 "abort": true, 00:06:09.371 "seek_hole": false, 
00:06:09.371 "seek_data": false, 00:06:09.371 "copy": true, 00:06:09.371 "nvme_iov_md": false 00:06:09.371 }, 00:06:09.371 "memory_domains": [ 00:06:09.371 { 00:06:09.371 "dma_device_id": "system", 00:06:09.371 "dma_device_type": 1 00:06:09.371 }, 00:06:09.371 { 00:06:09.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.371 "dma_device_type": 2 00:06:09.371 } 00:06:09.371 ], 00:06:09.371 "driver_specific": { 00:06:09.371 "passthru": { 00:06:09.371 "name": "Passthru0", 00:06:09.371 "base_bdev_name": "Malloc0" 00:06:09.371 } 00:06:09.371 } 00:06:09.371 } 00:06:09.371 ]' 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:09.371 19:02:15 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:09.371 00:06:09.371 real 0m0.298s 00:06:09.371 user 0m0.196s 00:06:09.371 sys 0m0.041s 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.371 19:02:15 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:09.371 ************************************ 00:06:09.371 END TEST rpc_integrity 00:06:09.371 ************************************ 00:06:09.371 19:02:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:09.371 19:02:15 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:09.371 19:02:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.371 19:02:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.371 19:02:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 ************************************ 00:06:09.632 START TEST rpc_plugins 00:06:09.632 ************************************ 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:09.632 { 00:06:09.632 "name": "Malloc1", 00:06:09.632 "aliases": [ 00:06:09.632 "03c6cc58-f3ea-4775-88c7-8b8bd68c712c" 00:06:09.632 ], 00:06:09.632 "product_name": "Malloc disk", 00:06:09.632 "block_size": 4096, 00:06:09.632 "num_blocks": 256, 00:06:09.632 "uuid": "03c6cc58-f3ea-4775-88c7-8b8bd68c712c", 00:06:09.632 "assigned_rate_limits": { 00:06:09.632 "rw_ios_per_sec": 0, 00:06:09.632 "rw_mbytes_per_sec": 0, 00:06:09.632 "r_mbytes_per_sec": 0, 00:06:09.632 "w_mbytes_per_sec": 0 00:06:09.632 }, 00:06:09.632 "claimed": false, 00:06:09.632 "zoned": false, 00:06:09.632 "supported_io_types": { 00:06:09.632 "read": true, 00:06:09.632 "write": true, 00:06:09.632 "unmap": true, 00:06:09.632 "flush": true, 00:06:09.632 "reset": true, 00:06:09.632 "nvme_admin": false, 00:06:09.632 "nvme_io": false, 00:06:09.632 "nvme_io_md": false, 00:06:09.632 "write_zeroes": true, 00:06:09.632 "zcopy": true, 00:06:09.632 "get_zone_info": false, 00:06:09.632 "zone_management": false, 00:06:09.632 "zone_append": false, 00:06:09.632 "compare": false, 00:06:09.632 "compare_and_write": false, 00:06:09.632 "abort": true, 00:06:09.632 "seek_hole": false, 00:06:09.632 "seek_data": false, 00:06:09.632 "copy": true, 00:06:09.632 "nvme_iov_md": false 00:06:09.632 }, 00:06:09.632 "memory_domains": [ 00:06:09.632 { 00:06:09.632 "dma_device_id": "system", 00:06:09.632 "dma_device_type": 1 00:06:09.632 }, 00:06:09.632 { 00:06:09.632 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:09.632 "dma_device_type": 2 00:06:09.632 } 00:06:09.632 ], 00:06:09.632 "driver_specific": {} 00:06:09.632 } 00:06:09.632 ]' 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:09.632 19:02:15 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:09.632 00:06:09.632 real 0m0.146s 00:06:09.632 user 0m0.094s 00:06:09.632 sys 0m0.022s 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.632 19:02:15 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:09.632 ************************************ 00:06:09.632 END TEST rpc_plugins 00:06:09.633 ************************************ 00:06:09.633 19:02:15 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:09.633 19:02:15 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:09.633 19:02:15 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:09.633 19:02:15 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.633 19:02:15 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.633 ************************************ 00:06:09.633 START TEST rpc_trace_cmd_test 00:06:09.633 ************************************ 00:06:09.633 19:02:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:06:09.633 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:09.633 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:09.633 19:02:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:09.633 19:02:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:09.894 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1211425", 00:06:09.894 "tpoint_group_mask": "0x8", 00:06:09.894 "iscsi_conn": { 00:06:09.894 "mask": "0x2", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "scsi": { 00:06:09.894 "mask": "0x4", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "bdev": { 00:06:09.894 "mask": "0x8", 00:06:09.894 "tpoint_mask": "0xffffffffffffffff" 00:06:09.894 }, 00:06:09.894 "nvmf_rdma": { 00:06:09.894 "mask": "0x10", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "nvmf_tcp": { 00:06:09.894 "mask": "0x20", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "ftl": { 00:06:09.894 "mask": "0x40", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "blobfs": { 00:06:09.894 "mask": "0x80", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "dsa": { 00:06:09.894 "mask": "0x200", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "thread": { 00:06:09.894 "mask": "0x400", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "nvme_pcie": { 00:06:09.894 "mask": "0x800", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "iaa": { 00:06:09.894 "mask": "0x1000", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "nvme_tcp": { 00:06:09.894 "mask": "0x2000", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "bdev_nvme": { 00:06:09.894 "mask": "0x4000", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 }, 00:06:09.894 "sock": { 00:06:09.894 "mask": "0x8000", 00:06:09.894 "tpoint_mask": "0x0" 00:06:09.894 } 00:06:09.894 }' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:09.894 19:02:15 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:09.894 19:02:16 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
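The trace test only verifies that the bdev tracepoint group enabled at startup with '-e bdev' is reported by trace_get_info, together with the shared-memory trace file path. The same data can be inspected by hand, and a snapshot captured with the command the application itself suggests earlier in this log; a sketch, assuming the target is still running:

  # dump tracepoint groups and masks over JSON-RPC
  ./scripts/rpc.py trace_get_info
  # groups can also be toggled at runtime (not done by this test)
  ./scripts/rpc.py trace_enable_tpoint_group bdev
  # capture a snapshot of /dev/shm/spdk_tgt_trace.pid<pid>
  spdk_trace -s spdk_tgt -p <spdk_tgt pid>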
00:06:09.894 00:06:09.894 real 0m0.248s 00:06:09.894 user 0m0.207s 00:06:09.894 sys 0m0.033s 00:06:09.894 19:02:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.894 19:02:16 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:09.894 ************************************ 00:06:09.894 END TEST rpc_trace_cmd_test 00:06:09.894 ************************************ 00:06:10.155 19:02:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.155 19:02:16 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:10.155 19:02:16 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:10.155 19:02:16 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:10.155 19:02:16 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.155 19:02:16 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.155 19:02:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.155 ************************************ 00:06:10.155 START TEST rpc_daemon_integrity 00:06:10.155 ************************************ 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.155 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.155 { 00:06:10.155 "name": "Malloc2", 00:06:10.155 "aliases": [ 00:06:10.155 "fd0b5d64-285c-4b1f-a018-091b20af1337" 00:06:10.155 ], 00:06:10.155 "product_name": "Malloc disk", 00:06:10.155 "block_size": 512, 00:06:10.155 "num_blocks": 16384, 00:06:10.155 "uuid": "fd0b5d64-285c-4b1f-a018-091b20af1337", 00:06:10.155 "assigned_rate_limits": { 00:06:10.155 "rw_ios_per_sec": 0, 00:06:10.155 "rw_mbytes_per_sec": 0, 00:06:10.155 "r_mbytes_per_sec": 0, 00:06:10.155 "w_mbytes_per_sec": 0 00:06:10.155 }, 00:06:10.155 "claimed": false, 00:06:10.155 "zoned": false, 00:06:10.155 "supported_io_types": { 00:06:10.155 "read": true, 00:06:10.155 "write": true, 00:06:10.155 "unmap": true, 00:06:10.155 "flush": true, 00:06:10.155 "reset": true, 00:06:10.155 "nvme_admin": false, 00:06:10.155 "nvme_io": false, 
00:06:10.155 "nvme_io_md": false, 00:06:10.155 "write_zeroes": true, 00:06:10.155 "zcopy": true, 00:06:10.155 "get_zone_info": false, 00:06:10.155 "zone_management": false, 00:06:10.155 "zone_append": false, 00:06:10.155 "compare": false, 00:06:10.156 "compare_and_write": false, 00:06:10.156 "abort": true, 00:06:10.156 "seek_hole": false, 00:06:10.156 "seek_data": false, 00:06:10.156 "copy": true, 00:06:10.156 "nvme_iov_md": false 00:06:10.156 }, 00:06:10.156 "memory_domains": [ 00:06:10.156 { 00:06:10.156 "dma_device_id": "system", 00:06:10.156 "dma_device_type": 1 00:06:10.156 }, 00:06:10.156 { 00:06:10.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.156 "dma_device_type": 2 00:06:10.156 } 00:06:10.156 ], 00:06:10.156 "driver_specific": {} 00:06:10.156 } 00:06:10.156 ]' 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.156 [2024-07-12 19:02:16.221536] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:10.156 [2024-07-12 19:02:16.221564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.156 [2024-07-12 19:02:16.221578] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15a5a90 00:06:10.156 [2024-07-12 19:02:16.221585] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.156 [2024-07-12 19:02:16.222795] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.156 [2024-07-12 19:02:16.222817] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.156 Passthru0 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.156 { 00:06:10.156 "name": "Malloc2", 00:06:10.156 "aliases": [ 00:06:10.156 "fd0b5d64-285c-4b1f-a018-091b20af1337" 00:06:10.156 ], 00:06:10.156 "product_name": "Malloc disk", 00:06:10.156 "block_size": 512, 00:06:10.156 "num_blocks": 16384, 00:06:10.156 "uuid": "fd0b5d64-285c-4b1f-a018-091b20af1337", 00:06:10.156 "assigned_rate_limits": { 00:06:10.156 "rw_ios_per_sec": 0, 00:06:10.156 "rw_mbytes_per_sec": 0, 00:06:10.156 "r_mbytes_per_sec": 0, 00:06:10.156 "w_mbytes_per_sec": 0 00:06:10.156 }, 00:06:10.156 "claimed": true, 00:06:10.156 "claim_type": "exclusive_write", 00:06:10.156 "zoned": false, 00:06:10.156 "supported_io_types": { 00:06:10.156 "read": true, 00:06:10.156 "write": true, 00:06:10.156 "unmap": true, 00:06:10.156 "flush": true, 00:06:10.156 "reset": true, 00:06:10.156 "nvme_admin": false, 00:06:10.156 "nvme_io": false, 00:06:10.156 "nvme_io_md": false, 00:06:10.156 "write_zeroes": true, 00:06:10.156 "zcopy": true, 00:06:10.156 "get_zone_info": 
false, 00:06:10.156 "zone_management": false, 00:06:10.156 "zone_append": false, 00:06:10.156 "compare": false, 00:06:10.156 "compare_and_write": false, 00:06:10.156 "abort": true, 00:06:10.156 "seek_hole": false, 00:06:10.156 "seek_data": false, 00:06:10.156 "copy": true, 00:06:10.156 "nvme_iov_md": false 00:06:10.156 }, 00:06:10.156 "memory_domains": [ 00:06:10.156 { 00:06:10.156 "dma_device_id": "system", 00:06:10.156 "dma_device_type": 1 00:06:10.156 }, 00:06:10.156 { 00:06:10.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.156 "dma_device_type": 2 00:06:10.156 } 00:06:10.156 ], 00:06:10.156 "driver_specific": {} 00:06:10.156 }, 00:06:10.156 { 00:06:10.156 "name": "Passthru0", 00:06:10.156 "aliases": [ 00:06:10.156 "08461d06-f130-5546-87d7-4f8c012e6c10" 00:06:10.156 ], 00:06:10.156 "product_name": "passthru", 00:06:10.156 "block_size": 512, 00:06:10.156 "num_blocks": 16384, 00:06:10.156 "uuid": "08461d06-f130-5546-87d7-4f8c012e6c10", 00:06:10.156 "assigned_rate_limits": { 00:06:10.156 "rw_ios_per_sec": 0, 00:06:10.156 "rw_mbytes_per_sec": 0, 00:06:10.156 "r_mbytes_per_sec": 0, 00:06:10.156 "w_mbytes_per_sec": 0 00:06:10.156 }, 00:06:10.156 "claimed": false, 00:06:10.156 "zoned": false, 00:06:10.156 "supported_io_types": { 00:06:10.156 "read": true, 00:06:10.156 "write": true, 00:06:10.156 "unmap": true, 00:06:10.156 "flush": true, 00:06:10.156 "reset": true, 00:06:10.156 "nvme_admin": false, 00:06:10.156 "nvme_io": false, 00:06:10.156 "nvme_io_md": false, 00:06:10.156 "write_zeroes": true, 00:06:10.156 "zcopy": true, 00:06:10.156 "get_zone_info": false, 00:06:10.156 "zone_management": false, 00:06:10.156 "zone_append": false, 00:06:10.156 "compare": false, 00:06:10.156 "compare_and_write": false, 00:06:10.156 "abort": true, 00:06:10.156 "seek_hole": false, 00:06:10.156 "seek_data": false, 00:06:10.156 "copy": true, 00:06:10.156 "nvme_iov_md": false 00:06:10.156 }, 00:06:10.156 "memory_domains": [ 00:06:10.156 { 00:06:10.156 "dma_device_id": "system", 00:06:10.156 "dma_device_type": 1 00:06:10.156 }, 00:06:10.156 { 00:06:10.156 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.156 "dma_device_type": 2 00:06:10.156 } 00:06:10.156 ], 00:06:10.156 "driver_specific": { 00:06:10.156 "passthru": { 00:06:10.156 "name": "Passthru0", 00:06:10.156 "base_bdev_name": "Malloc2" 00:06:10.156 } 00:06:10.156 } 00:06:10.156 } 00:06:10.156 ]' 00:06:10.156 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:10.417 19:02:16 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.417 00:06:10.417 real 0m0.292s 00:06:10.417 user 0m0.199s 00:06:10.417 sys 0m0.034s 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.417 19:02:16 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:10.417 ************************************ 00:06:10.417 END TEST rpc_daemon_integrity 00:06:10.417 ************************************ 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:10.417 19:02:16 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:10.417 19:02:16 rpc -- rpc/rpc.sh@84 -- # killprocess 1211425 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@948 -- # '[' -z 1211425 ']' 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@952 -- # kill -0 1211425 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@953 -- # uname 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1211425 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1211425' 00:06:10.417 killing process with pid 1211425 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@967 -- # kill 1211425 00:06:10.417 19:02:16 rpc -- common/autotest_common.sh@972 -- # wait 1211425 00:06:10.677 00:06:10.677 real 0m2.472s 00:06:10.677 user 0m3.285s 00:06:10.677 sys 0m0.677s 00:06:10.677 19:02:16 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.677 19:02:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.677 ************************************ 00:06:10.677 END TEST rpc 00:06:10.677 ************************************ 00:06:10.677 19:02:16 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.677 19:02:16 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.677 19:02:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.677 19:02:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.677 19:02:16 -- common/autotest_common.sh@10 -- # set +x 00:06:10.677 ************************************ 00:06:10.677 START TEST skip_rpc 00:06:10.677 ************************************ 00:06:10.677 19:02:16 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:10.938 * Looking for test storage... 
00:06:10.938 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:10.938 19:02:16 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:10.938 19:02:16 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:10.938 19:02:16 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:10.938 19:02:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.938 19:02:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.938 19:02:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.938 ************************************ 00:06:10.938 START TEST skip_rpc 00:06:10.938 ************************************ 00:06:10.938 19:02:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:06:10.938 19:02:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1211953 00:06:10.938 19:02:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:10.938 19:02:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:10.938 19:02:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:06:10.938 [2024-07-12 19:02:16.936337] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:10.938 [2024-07-12 19:02:16.936414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1211953 ] 00:06:10.938 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.938 [2024-07-12 19:02:17.004294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.199 [2024-07-12 19:02:17.079971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1211953 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1211953 ']' 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1211953 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1211953 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1211953' 00:06:16.485 killing process with pid 1211953 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1211953 00:06:16.485 19:02:21 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1211953 00:06:16.485 00:06:16.485 real 0m5.277s 00:06:16.485 user 0m5.061s 00:06:16.485 sys 0m0.242s 00:06:16.485 19:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.485 19:02:22 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.485 ************************************ 00:06:16.485 END TEST skip_rpc 00:06:16.485 ************************************ 00:06:16.485 19:02:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:16.485 19:02:22 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:16.485 19:02:22 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.485 19:02:22 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.485 19:02:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.485 ************************************ 00:06:16.485 START TEST skip_rpc_with_json 00:06:16.485 ************************************ 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1213091 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1213091 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1213091 ']' 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
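The skip_rpc case that just finished is the negative check: a target started with --no-rpc-server never creates the RPC socket, so any client call has to fail. Reproduced by hand it looks roughly like this (paths as used throughout this job):

  # start a target with the RPC server disabled, pinned to core 0
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  sleep 5
  # expected to fail: there is no /var/tmp/spdk.sock to connect to
  ./scripts/rpc.py spdk_get_version || echo 'RPC unreachable, as expected'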
00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.485 19:02:22 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:16.485 [2024-07-12 19:02:22.290298] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:16.485 [2024-07-12 19:02:22.290358] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213091 ] 00:06:16.485 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.485 [2024-07-12 19:02:22.352977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.485 [2024-07-12 19:02:22.427500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.055 [2024-07-12 19:02:23.051955] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:17.055 request: 00:06:17.055 { 00:06:17.055 "trtype": "tcp", 00:06:17.055 "method": "nvmf_get_transports", 00:06:17.055 "req_id": 1 00:06:17.055 } 00:06:17.055 Got JSON-RPC error response 00:06:17.055 response: 00:06:17.055 { 00:06:17.055 "code": -19, 00:06:17.055 "message": "No such device" 00:06:17.055 } 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.055 [2024-07-12 19:02:23.064076] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.055 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:17.316 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.316 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:17.316 { 00:06:17.316 "subsystems": [ 00:06:17.316 { 00:06:17.316 "subsystem": "vfio_user_target", 00:06:17.316 "config": null 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "subsystem": "keyring", 00:06:17.316 "config": [] 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "subsystem": "iobuf", 00:06:17.316 "config": [ 00:06:17.316 { 00:06:17.316 "method": "iobuf_set_options", 00:06:17.316 "params": { 00:06:17.316 "small_pool_count": 8192, 00:06:17.316 "large_pool_count": 1024, 00:06:17.316 "small_bufsize": 8192, 00:06:17.316 "large_bufsize": 
135168 00:06:17.316 } 00:06:17.316 } 00:06:17.316 ] 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "subsystem": "sock", 00:06:17.316 "config": [ 00:06:17.316 { 00:06:17.316 "method": "sock_set_default_impl", 00:06:17.316 "params": { 00:06:17.316 "impl_name": "posix" 00:06:17.316 } 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "method": "sock_impl_set_options", 00:06:17.316 "params": { 00:06:17.316 "impl_name": "ssl", 00:06:17.316 "recv_buf_size": 4096, 00:06:17.316 "send_buf_size": 4096, 00:06:17.316 "enable_recv_pipe": true, 00:06:17.316 "enable_quickack": false, 00:06:17.316 "enable_placement_id": 0, 00:06:17.316 "enable_zerocopy_send_server": true, 00:06:17.316 "enable_zerocopy_send_client": false, 00:06:17.316 "zerocopy_threshold": 0, 00:06:17.316 "tls_version": 0, 00:06:17.316 "enable_ktls": false 00:06:17.316 } 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "method": "sock_impl_set_options", 00:06:17.316 "params": { 00:06:17.316 "impl_name": "posix", 00:06:17.316 "recv_buf_size": 2097152, 00:06:17.316 "send_buf_size": 2097152, 00:06:17.316 "enable_recv_pipe": true, 00:06:17.316 "enable_quickack": false, 00:06:17.316 "enable_placement_id": 0, 00:06:17.316 "enable_zerocopy_send_server": true, 00:06:17.316 "enable_zerocopy_send_client": false, 00:06:17.316 "zerocopy_threshold": 0, 00:06:17.316 "tls_version": 0, 00:06:17.316 "enable_ktls": false 00:06:17.316 } 00:06:17.316 } 00:06:17.316 ] 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "subsystem": "vmd", 00:06:17.316 "config": [] 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "subsystem": "accel", 00:06:17.316 "config": [ 00:06:17.316 { 00:06:17.316 "method": "accel_set_options", 00:06:17.316 "params": { 00:06:17.316 "small_cache_size": 128, 00:06:17.316 "large_cache_size": 16, 00:06:17.316 "task_count": 2048, 00:06:17.316 "sequence_count": 2048, 00:06:17.316 "buf_count": 2048 00:06:17.316 } 00:06:17.316 } 00:06:17.316 ] 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "subsystem": "bdev", 00:06:17.316 "config": [ 00:06:17.316 { 00:06:17.316 "method": "bdev_set_options", 00:06:17.316 "params": { 00:06:17.316 "bdev_io_pool_size": 65535, 00:06:17.316 "bdev_io_cache_size": 256, 00:06:17.316 "bdev_auto_examine": true, 00:06:17.316 "iobuf_small_cache_size": 128, 00:06:17.316 "iobuf_large_cache_size": 16 00:06:17.316 } 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "method": "bdev_raid_set_options", 00:06:17.316 "params": { 00:06:17.316 "process_window_size_kb": 1024 00:06:17.316 } 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "method": "bdev_iscsi_set_options", 00:06:17.316 "params": { 00:06:17.316 "timeout_sec": 30 00:06:17.316 } 00:06:17.316 }, 00:06:17.316 { 00:06:17.316 "method": "bdev_nvme_set_options", 00:06:17.316 "params": { 00:06:17.316 "action_on_timeout": "none", 00:06:17.316 "timeout_us": 0, 00:06:17.316 "timeout_admin_us": 0, 00:06:17.316 "keep_alive_timeout_ms": 10000, 00:06:17.316 "arbitration_burst": 0, 00:06:17.316 "low_priority_weight": 0, 00:06:17.316 "medium_priority_weight": 0, 00:06:17.316 "high_priority_weight": 0, 00:06:17.316 "nvme_adminq_poll_period_us": 10000, 00:06:17.316 "nvme_ioq_poll_period_us": 0, 00:06:17.316 "io_queue_requests": 0, 00:06:17.316 "delay_cmd_submit": true, 00:06:17.316 "transport_retry_count": 4, 00:06:17.316 "bdev_retry_count": 3, 00:06:17.316 "transport_ack_timeout": 0, 00:06:17.316 "ctrlr_loss_timeout_sec": 0, 00:06:17.316 "reconnect_delay_sec": 0, 00:06:17.316 "fast_io_fail_timeout_sec": 0, 00:06:17.316 "disable_auto_failback": false, 00:06:17.316 "generate_uuids": false, 00:06:17.316 "transport_tos": 0, 
00:06:17.316 "nvme_error_stat": false, 00:06:17.316 "rdma_srq_size": 0, 00:06:17.316 "io_path_stat": false, 00:06:17.317 "allow_accel_sequence": false, 00:06:17.317 "rdma_max_cq_size": 0, 00:06:17.317 "rdma_cm_event_timeout_ms": 0, 00:06:17.317 "dhchap_digests": [ 00:06:17.317 "sha256", 00:06:17.317 "sha384", 00:06:17.317 "sha512" 00:06:17.317 ], 00:06:17.317 "dhchap_dhgroups": [ 00:06:17.317 "null", 00:06:17.317 "ffdhe2048", 00:06:17.317 "ffdhe3072", 00:06:17.317 "ffdhe4096", 00:06:17.317 "ffdhe6144", 00:06:17.317 "ffdhe8192" 00:06:17.317 ] 00:06:17.317 } 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "method": "bdev_nvme_set_hotplug", 00:06:17.317 "params": { 00:06:17.317 "period_us": 100000, 00:06:17.317 "enable": false 00:06:17.317 } 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "method": "bdev_wait_for_examine" 00:06:17.317 } 00:06:17.317 ] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "scsi", 00:06:17.317 "config": null 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "scheduler", 00:06:17.317 "config": [ 00:06:17.317 { 00:06:17.317 "method": "framework_set_scheduler", 00:06:17.317 "params": { 00:06:17.317 "name": "static" 00:06:17.317 } 00:06:17.317 } 00:06:17.317 ] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "vhost_scsi", 00:06:17.317 "config": [] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "vhost_blk", 00:06:17.317 "config": [] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "ublk", 00:06:17.317 "config": [] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "nbd", 00:06:17.317 "config": [] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "nvmf", 00:06:17.317 "config": [ 00:06:17.317 { 00:06:17.317 "method": "nvmf_set_config", 00:06:17.317 "params": { 00:06:17.317 "discovery_filter": "match_any", 00:06:17.317 "admin_cmd_passthru": { 00:06:17.317 "identify_ctrlr": false 00:06:17.317 } 00:06:17.317 } 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "method": "nvmf_set_max_subsystems", 00:06:17.317 "params": { 00:06:17.317 "max_subsystems": 1024 00:06:17.317 } 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "method": "nvmf_set_crdt", 00:06:17.317 "params": { 00:06:17.317 "crdt1": 0, 00:06:17.317 "crdt2": 0, 00:06:17.317 "crdt3": 0 00:06:17.317 } 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "method": "nvmf_create_transport", 00:06:17.317 "params": { 00:06:17.317 "trtype": "TCP", 00:06:17.317 "max_queue_depth": 128, 00:06:17.317 "max_io_qpairs_per_ctrlr": 127, 00:06:17.317 "in_capsule_data_size": 4096, 00:06:17.317 "max_io_size": 131072, 00:06:17.317 "io_unit_size": 131072, 00:06:17.317 "max_aq_depth": 128, 00:06:17.317 "num_shared_buffers": 511, 00:06:17.317 "buf_cache_size": 4294967295, 00:06:17.317 "dif_insert_or_strip": false, 00:06:17.317 "zcopy": false, 00:06:17.317 "c2h_success": true, 00:06:17.317 "sock_priority": 0, 00:06:17.317 "abort_timeout_sec": 1, 00:06:17.317 "ack_timeout": 0, 00:06:17.317 "data_wr_pool_size": 0 00:06:17.317 } 00:06:17.317 } 00:06:17.317 ] 00:06:17.317 }, 00:06:17.317 { 00:06:17.317 "subsystem": "iscsi", 00:06:17.317 "config": [ 00:06:17.317 { 00:06:17.317 "method": "iscsi_set_options", 00:06:17.317 "params": { 00:06:17.317 "node_base": "iqn.2016-06.io.spdk", 00:06:17.317 "max_sessions": 128, 00:06:17.317 "max_connections_per_session": 2, 00:06:17.317 "max_queue_depth": 64, 00:06:17.317 "default_time2wait": 2, 00:06:17.317 "default_time2retain": 20, 00:06:17.317 "first_burst_length": 8192, 00:06:17.317 "immediate_data": true, 00:06:17.317 "allow_duplicated_isid": false, 00:06:17.317 
"error_recovery_level": 0, 00:06:17.317 "nop_timeout": 60, 00:06:17.317 "nop_in_interval": 30, 00:06:17.317 "disable_chap": false, 00:06:17.317 "require_chap": false, 00:06:17.317 "mutual_chap": false, 00:06:17.317 "chap_group": 0, 00:06:17.317 "max_large_datain_per_connection": 64, 00:06:17.317 "max_r2t_per_connection": 4, 00:06:17.317 "pdu_pool_size": 36864, 00:06:17.317 "immediate_data_pool_size": 16384, 00:06:17.317 "data_out_pool_size": 2048 00:06:17.317 } 00:06:17.317 } 00:06:17.317 ] 00:06:17.317 } 00:06:17.317 ] 00:06:17.317 } 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1213091 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1213091 ']' 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1213091 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1213091 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1213091' 00:06:17.317 killing process with pid 1213091 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1213091 00:06:17.317 19:02:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1213091 00:06:17.579 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1213329 00:06:17.579 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:17.579 19:02:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1213329 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1213329 ']' 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1213329 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1213329 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1213329' 00:06:22.867 killing process with pid 1213329 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1213329 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1213329 
00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:22.867 00:06:22.867 real 0m6.535s 00:06:22.867 user 0m6.414s 00:06:22.867 sys 0m0.510s 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:22.867 ************************************ 00:06:22.867 END TEST skip_rpc_with_json 00:06:22.867 ************************************ 00:06:22.867 19:02:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.867 19:02:28 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:22.867 19:02:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.867 19:02:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.867 19:02:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.867 ************************************ 00:06:22.867 START TEST skip_rpc_with_delay 00:06:22.867 ************************************ 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.867 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:22.868 [2024-07-12 19:02:28.910816] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
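The skip_rpc_with_json run that completed above is a save/restore round trip: the first target gets a TCP transport via nvmf_create_transport, save_config dumps its whole configuration, and a second target is booted from that JSON; the grep for 'TCP Transport Init' in its log confirms the transport was recreated. Done manually it is roughly the following (the config path is arbitrary):

  # against a running target: create the transport, then snapshot the configuration
  ./scripts/rpc.py nvmf_create_transport -t tcp
  ./scripts/rpc.py save_config > /tmp/config.json
  # boot a fresh target straight from the saved JSON; it should log 'TCP Transport Init' again
  ./build/bin/spdk_tgt -m 0x1 --json /tmp/config.json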
00:06:22.868 [2024-07-12 19:02:28.910922] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:22.868 00:06:22.868 real 0m0.080s 00:06:22.868 user 0m0.050s 00:06:22.868 sys 0m0.029s 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.868 19:02:28 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:22.868 ************************************ 00:06:22.868 END TEST skip_rpc_with_delay 00:06:22.868 ************************************ 00:06:22.868 19:02:28 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:22.868 19:02:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:22.868 19:02:28 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:22.868 19:02:28 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:22.868 19:02:28 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:22.868 19:02:28 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.868 19:02:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.127 ************************************ 00:06:23.127 START TEST exit_on_failed_rpc_init 00:06:23.127 ************************************ 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1214557 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1214557 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1214557 ']' 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.127 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:23.127 [2024-07-12 19:02:29.068176] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:06:23.127 [2024-07-12 19:02:29.068237] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214557 ] 00:06:23.127 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.127 [2024-07-12 19:02:29.133580] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.127 [2024-07-12 19:02:29.208217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.067 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:24.068 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:24.068 19:02:29 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:24.068 [2024-07-12 19:02:29.904538] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:06:24.068 [2024-07-12 19:02:29.904592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214729 ] 00:06:24.068 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.068 [2024-07-12 19:02:29.980064] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.068 [2024-07-12 19:02:30.047961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:24.068 [2024-07-12 19:02:30.048026] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:24.068 [2024-07-12 19:02:30.048036] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:24.068 [2024-07-12 19:02:30.048043] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1214557 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1214557 ']' 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1214557 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1214557 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1214557' 00:06:24.068 killing process with pid 1214557 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1214557 00:06:24.068 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1214557 00:06:24.329 00:06:24.329 real 0m1.364s 00:06:24.329 user 0m1.616s 00:06:24.329 sys 0m0.369s 00:06:24.329 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.329 19:02:30 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:24.329 ************************************ 00:06:24.329 END TEST exit_on_failed_rpc_init 00:06:24.329 ************************************ 00:06:24.329 19:02:30 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:24.329 19:02:30 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:24.329 00:06:24.329 real 0m13.674s 00:06:24.329 user 0m13.283s 00:06:24.329 sys 0m1.450s 00:06:24.329 19:02:30 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.329 19:02:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:24.329 ************************************ 00:06:24.329 END TEST skip_rpc 00:06:24.329 ************************************ 00:06:24.329 19:02:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.329 19:02:30 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.329 19:02:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.329 19:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.329 19:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.590 ************************************ 00:06:24.590 START TEST rpc_client 00:06:24.590 ************************************ 00:06:24.590 19:02:30 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:24.590 * Looking for test storage... 00:06:24.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:24.590 19:02:30 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:24.590 OK 00:06:24.590 19:02:30 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:24.590 00:06:24.590 real 0m0.124s 00:06:24.590 user 0m0.056s 00:06:24.590 sys 0m0.076s 00:06:24.590 19:02:30 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.590 19:02:30 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:24.590 ************************************ 00:06:24.590 END TEST rpc_client 00:06:24.590 ************************************ 00:06:24.590 19:02:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:24.590 19:02:30 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.590 19:02:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.590 19:02:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.590 19:02:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.590 ************************************ 00:06:24.590 START TEST json_config 00:06:24.590 ************************************ 00:06:24.590 19:02:30 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:24.851 19:02:30 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:24.851 
19:02:30 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:24.851 19:02:30 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:24.851 19:02:30 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:24.851 19:02:30 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:24.851 19:02:30 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 19:02:30 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 19:02:30 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 19:02:30 json_config -- paths/export.sh@5 -- # export PATH 00:06:24.851 19:02:30 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@47 -- # : 0 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:24.851 19:02:30 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:24.851 19:02:30 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:24.851 19:02:30 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:06:24.852 INFO: JSON configuration test init 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.852 19:02:30 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:06:24.852 19:02:30 json_config -- json_config/common.sh@9 -- # local app=target 00:06:24.852 19:02:30 json_config -- json_config/common.sh@10 -- # shift 00:06:24.852 19:02:30 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:24.852 19:02:30 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:24.852 19:02:30 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:24.852 19:02:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.852 19:02:30 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:24.852 19:02:30 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1215024 00:06:24.852 19:02:30 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:24.852 Waiting for target to run... 00:06:24.852 19:02:30 json_config -- json_config/common.sh@25 -- # waitforlisten 1215024 /var/tmp/spdk_tgt.sock 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@829 -- # '[' -z 1215024 ']' 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.852 19:02:30 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:24.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.852 19:02:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:24.852 [2024-07-12 19:02:30.873821] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:24.852 [2024-07-12 19:02:30.873898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215024 ] 00:06:24.852 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.424 [2024-07-12 19:02:31.295430] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.424 [2024-07-12 19:02:31.347362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.685 19:02:31 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.685 19:02:31 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:25.685 19:02:31 json_config -- json_config/common.sh@26 -- # echo '' 00:06:25.685 00:06:25.685 19:02:31 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:06:25.685 19:02:31 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:06:25.685 19:02:31 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:25.685 19:02:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.685 19:02:31 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:06:25.685 19:02:31 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:06:25.685 19:02:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:25.685 19:02:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:25.685 19:02:31 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:25.685 19:02:31 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:06:25.685 19:02:31 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:26.257 19:02:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:26.257 19:02:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:06:26.257 19:02:32 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:06:26.257 19:02:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@48 -- # local get_types 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:06:26.518 19:02:32 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:26.518 19:02:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@55 -- # return 0 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:06:26.518 19:02:32 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:26.518 19:02:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:26.518 19:02:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:26.518 MallocForNvmf0 00:06:26.518 19:02:32 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:26.518 19:02:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:26.780 MallocForNvmf1 00:06:26.780 19:02:32 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:26.780 19:02:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:27.039 [2024-07-12 19:02:32.927583] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:27.039 19:02:32 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:27.039 19:02:32 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:27.040 19:02:33 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:27.040 19:02:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:27.299 19:02:33 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:27.299 19:02:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:27.560 19:02:33 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:27.560 19:02:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:27.560 [2024-07-12 19:02:33.573646] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:27.560 19:02:33 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:06:27.560 19:02:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.560 19:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 19:02:33 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:06:27.560 19:02:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.560 19:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.560 19:02:33 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:06:27.560 19:02:33 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.560 19:02:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:27.821 MallocBdevForConfigChangeCheck 00:06:27.821 19:02:33 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:06:27.821 19:02:33 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:27.821 19:02:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:27.821 19:02:33 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:06:27.821 19:02:33 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.083 19:02:34 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:06:28.083 INFO: shutting down applications... 00:06:28.083 19:02:34 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:06:28.083 19:02:34 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:06:28.083 19:02:34 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:06:28.083 19:02:34 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:28.655 Calling clear_iscsi_subsystem 00:06:28.655 Calling clear_nvmf_subsystem 00:06:28.655 Calling clear_nbd_subsystem 00:06:28.655 Calling clear_ublk_subsystem 00:06:28.655 Calling clear_vhost_blk_subsystem 00:06:28.655 Calling clear_vhost_scsi_subsystem 00:06:28.655 Calling clear_bdev_subsystem 00:06:28.655 19:02:34 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:28.655 19:02:34 json_config -- json_config/json_config.sh@343 -- # count=100 00:06:28.655 19:02:34 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:06:28.655 19:02:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:28.655 19:02:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:28.655 19:02:34 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:28.917 19:02:34 json_config -- json_config/json_config.sh@345 -- # break 00:06:28.917 19:02:34 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:06:28.917 19:02:34 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:06:28.917 19:02:34 json_config -- json_config/common.sh@31 -- # local app=target 00:06:28.917 19:02:34 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:28.917 19:02:34 json_config -- json_config/common.sh@35 -- # [[ -n 1215024 ]] 00:06:28.917 19:02:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1215024 00:06:28.917 19:02:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:28.917 19:02:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:28.917 19:02:34 json_config -- json_config/common.sh@41 -- # kill -0 1215024 00:06:28.917 19:02:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:29.489 19:02:35 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:29.489 19:02:35 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:29.489 19:02:35 json_config -- json_config/common.sh@41 -- # kill -0 1215024 00:06:29.489 19:02:35 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:29.489 19:02:35 json_config -- json_config/common.sh@43 -- # break 00:06:29.489 19:02:35 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:29.489 19:02:35 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:06:29.489 SPDK target shutdown done 00:06:29.489 19:02:35 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:06:29.489 INFO: relaunching applications... 00:06:29.489 19:02:35 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.489 19:02:35 json_config -- json_config/common.sh@9 -- # local app=target 00:06:29.489 19:02:35 json_config -- json_config/common.sh@10 -- # shift 00:06:29.489 19:02:35 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:29.489 19:02:35 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:29.489 19:02:35 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:29.489 19:02:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.489 19:02:35 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:29.489 19:02:35 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1215977 00:06:29.489 19:02:35 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:29.489 Waiting for target to run... 00:06:29.489 19:02:35 json_config -- json_config/common.sh@25 -- # waitforlisten 1215977 /var/tmp/spdk_tgt.sock 00:06:29.489 19:02:35 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:29.489 19:02:35 json_config -- common/autotest_common.sh@829 -- # '[' -z 1215977 ']' 00:06:29.489 19:02:35 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:29.489 19:02:35 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.489 19:02:35 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:29.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:29.489 19:02:35 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.489 19:02:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:29.489 [2024-07-12 19:02:35.478133] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:06:29.489 [2024-07-12 19:02:35.478193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1215977 ] 00:06:29.489 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.750 [2024-07-12 19:02:35.756398] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.750 [2024-07-12 19:02:35.808566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.323 [2024-07-12 19:02:36.305155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:30.323 [2024-07-12 19:02:36.337511] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:30.323 19:02:36 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.323 19:02:36 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:30.323 19:02:36 json_config -- json_config/common.sh@26 -- # echo '' 00:06:30.323 00:06:30.323 19:02:36 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:30.323 19:02:36 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:30.323 INFO: Checking if target configuration is the same... 00:06:30.323 19:02:36 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.323 19:02:36 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:30.323 19:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.323 + '[' 2 -ne 2 ']' 00:06:30.323 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.323 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:30.323 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.323 +++ basename /dev/fd/62 00:06:30.323 ++ mktemp /tmp/62.XXX 00:06:30.323 + tmp_file_1=/tmp/62.8YL 00:06:30.323 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.323 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.323 + tmp_file_2=/tmp/spdk_tgt_config.json.dAb 00:06:30.323 + ret=0 00:06:30.323 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.583 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:30.844 + diff -u /tmp/62.8YL /tmp/spdk_tgt_config.json.dAb 00:06:30.844 + echo 'INFO: JSON config files are the same' 00:06:30.844 INFO: JSON config files are the same 00:06:30.844 + rm /tmp/62.8YL /tmp/spdk_tgt_config.json.dAb 00:06:30.844 + exit 0 00:06:30.844 19:02:36 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:30.844 19:02:36 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:30.844 INFO: changing configuration and checking if this can be detected... 
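The 'JSON config files are the same' verdict above is produced by json_diff.sh: it dumps the live configuration over the RPC socket, normalizes both documents with config_filter.py -method sort, and diffs the results. Roughly, assuming config_filter.py reads the config on stdin as its invocation in the trace suggests (the /tmp file names here are illustrative; the real script uses mktemp):

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json          # current target state
  ./test/json_config/config_filter.py -method sort < /tmp/live.json > /tmp/live.sorted
  ./test/json_config/config_filter.py -method sort < ./spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/live.sorted /tmp/saved.sorted \
      && echo 'INFO: JSON config files are the same'                               # empty diff == identical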
00:06:30.844 19:02:36 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.844 19:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:30.844 19:02:36 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:30.844 19:02:36 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.844 19:02:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:30.845 + '[' 2 -ne 2 ']' 00:06:30.845 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:30.845 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:30.845 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:30.845 +++ basename /dev/fd/62 00:06:30.845 ++ mktemp /tmp/62.XXX 00:06:30.845 + tmp_file_1=/tmp/62.tXk 00:06:30.845 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:30.845 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:30.845 + tmp_file_2=/tmp/spdk_tgt_config.json.lst 00:06:30.845 + ret=0 00:06:30.845 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.105 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:31.105 + diff -u /tmp/62.tXk /tmp/spdk_tgt_config.json.lst 00:06:31.366 + ret=1 00:06:31.366 + echo '=== Start of file: /tmp/62.tXk ===' 00:06:31.366 + cat /tmp/62.tXk 00:06:31.366 + echo '=== End of file: /tmp/62.tXk ===' 00:06:31.366 + echo '' 00:06:31.366 + echo '=== Start of file: /tmp/spdk_tgt_config.json.lst ===' 00:06:31.366 + cat /tmp/spdk_tgt_config.json.lst 00:06:31.366 + echo '=== End of file: /tmp/spdk_tgt_config.json.lst ===' 00:06:31.366 + echo '' 00:06:31.366 + rm /tmp/62.tXk /tmp/spdk_tgt_config.json.lst 00:06:31.366 + exit 1 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:31.366 INFO: configuration change detected. 
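The change-detection pass above perturbs the target on purpose, deleting the sentinel MallocBdevForConfigChangeCheck bdev, and re-runs the same sorted diff, which now has to come back non-empty (ret=1 in the trace). In sketch form, with the same RPC socket and the same illustrative temp-file naming as in the previous sketch:

  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | ./test/json_config/config_filter.py -method sort > /tmp/live.sorted        # re-dump after the delete
  ./test/json_config/config_filter.py -method sort < ./spdk_tgt_config.json > /tmp/saved.sorted
  diff -u /tmp/live.sorted /tmp/saved.sorted \
      || echo 'INFO: configuration change detected.'                               # non-empty diff is the pass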
00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@317 -- # [[ -n 1215977 ]] 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.366 19:02:37 json_config -- json_config/json_config.sh@323 -- # killprocess 1215977 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@948 -- # '[' -z 1215977 ']' 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@952 -- # kill -0 1215977 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@953 -- # uname 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1215977 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1215977' 00:06:31.366 killing process with pid 1215977 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@967 -- # kill 1215977 00:06:31.366 19:02:37 json_config -- common/autotest_common.sh@972 -- # wait 1215977 00:06:31.627 19:02:37 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:31.627 19:02:37 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:31.627 19:02:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:31.627 19:02:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.627 19:02:37 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:31.627 19:02:37 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:31.627 INFO: Success 00:06:31.627 00:06:31.627 real 0m6.993s 
00:06:31.627 user 0m8.354s 00:06:31.627 sys 0m1.827s 00:06:31.627 19:02:37 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.627 19:02:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:31.627 ************************************ 00:06:31.627 END TEST json_config 00:06:31.627 ************************************ 00:06:31.627 19:02:37 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.627 19:02:37 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:31.627 19:02:37 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.627 19:02:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.627 19:02:37 -- common/autotest_common.sh@10 -- # set +x 00:06:31.889 ************************************ 00:06:31.889 START TEST json_config_extra_key 00:06:31.889 ************************************ 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:31.889 19:02:37 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:31.889 19:02:37 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:31.889 19:02:37 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:31.889 19:02:37 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.889 19:02:37 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.889 19:02:37 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.889 19:02:37 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:31.889 19:02:37 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:31.889 19:02:37 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:31.889 19:02:37 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:31.889 INFO: launching applications... 00:06:31.889 19:02:37 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1216748 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:31.889 Waiting for target to run... 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1216748 /var/tmp/spdk_tgt.sock 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1216748 ']' 00:06:31.889 19:02:37 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:31.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.889 19:02:37 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:31.889 [2024-07-12 19:02:37.932772] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:06:31.889 [2024-07-12 19:02:37.932846] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1216748 ] 00:06:31.889 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.150 [2024-07-12 19:02:38.178059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.150 [2024-07-12 19:02:38.228052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.720 19:02:38 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.720 19:02:38 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:32.720 00:06:32.720 19:02:38 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:32.720 INFO: shutting down applications... 00:06:32.720 19:02:38 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1216748 ]] 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1216748 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1216748 00:06:32.720 19:02:38 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:33.290 19:02:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:33.291 19:02:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:33.291 19:02:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1216748 00:06:33.291 19:02:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:33.291 19:02:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:33.291 19:02:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:33.291 19:02:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:33.291 SPDK target shutdown done 00:06:33.291 19:02:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:33.291 Success 00:06:33.291 00:06:33.291 real 0m1.436s 00:06:33.291 user 0m1.103s 00:06:33.291 sys 0m0.357s 00:06:33.291 19:02:39 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.291 19:02:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:33.291 ************************************ 00:06:33.291 END TEST json_config_extra_key 00:06:33.291 ************************************ 00:06:33.291 19:02:39 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.291 19:02:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.291 19:02:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.291 19:02:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.291 19:02:39 -- 
common/autotest_common.sh@10 -- # set +x 00:06:33.291 ************************************ 00:06:33.291 START TEST alias_rpc 00:06:33.291 ************************************ 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:33.291 * Looking for test storage... 00:06:33.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:33.291 19:02:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.291 19:02:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1217121 00:06:33.291 19:02:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1217121 00:06:33.291 19:02:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1217121 ']' 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.291 19:02:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.552 [2024-07-12 19:02:39.433432] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:33.552 [2024-07-12 19:02:39.433486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217121 ] 00:06:33.552 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.552 [2024-07-12 19:02:39.492670] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.552 [2024-07-12 19:02:39.556820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.123 19:02:40 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.123 19:02:40 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.123 19:02:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:34.384 19:02:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1217121 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1217121 ']' 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1217121 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1217121 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1217121' 00:06:34.384 killing process with pid 1217121 00:06:34.384 19:02:40 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1217121 00:06:34.384 19:02:40 alias_rpc -- common/autotest_common.sh@972 -- # wait 1217121 00:06:34.646 00:06:34.646 real 0m1.379s 00:06:34.646 user 0m1.522s 00:06:34.646 sys 0m0.371s 00:06:34.646 19:02:40 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.646 19:02:40 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.646 ************************************ 00:06:34.646 END TEST alias_rpc 00:06:34.646 ************************************ 00:06:34.646 19:02:40 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.646 19:02:40 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:34.646 19:02:40 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:34.646 19:02:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.646 19:02:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.646 19:02:40 -- common/autotest_common.sh@10 -- # set +x 00:06:34.646 ************************************ 00:06:34.646 START TEST spdkcli_tcp 00:06:34.646 ************************************ 00:06:34.646 19:02:40 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:34.960 * Looking for test storage... 00:06:34.960 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1217390 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1217390 00:06:34.960 19:02:40 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1217390 ']' 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.960 19:02:40 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:34.960 [2024-07-12 19:02:40.893939] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
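The spdkcli_tcp test that begins here exercises the same RPC surface over TCP: it bridges a local TCP port to the target's UNIX-domain RPC socket with socat and then points rpc.py at 127.0.0.1:9998. A rough sketch of that bridge using the addresses the test declares (spdk_tgt and rpc.py paths abbreviated to a local checkout):

  # 1. Target on two cores with main core 0, listening on the default /var/tmp/spdk.sock
  ./spdk/build/bin/spdk_tgt -m 0x3 -p 0 &

  # 2. Forward TCP port 9998 to the UNIX socket
  socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &

  # 3. Issue RPCs over TCP; -r/-t add connection retries and a timeout while the bridge comes up
  ./spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

The rpc_get_methods reply that follows in the log is the full method list returned through that TCP path.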
00:06:34.960 [2024-07-12 19:02:40.894015] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217390 ] 00:06:34.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.960 [2024-07-12 19:02:40.960864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:34.960 [2024-07-12 19:02:41.037073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.960 [2024-07-12 19:02:41.037076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.547 19:02:41 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.547 19:02:41 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:35.547 19:02:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1217538 00:06:35.547 19:02:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:35.547 19:02:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:35.807 [ 00:06:35.807 "bdev_malloc_delete", 00:06:35.807 "bdev_malloc_create", 00:06:35.807 "bdev_null_resize", 00:06:35.807 "bdev_null_delete", 00:06:35.807 "bdev_null_create", 00:06:35.807 "bdev_nvme_cuse_unregister", 00:06:35.807 "bdev_nvme_cuse_register", 00:06:35.807 "bdev_opal_new_user", 00:06:35.807 "bdev_opal_set_lock_state", 00:06:35.807 "bdev_opal_delete", 00:06:35.807 "bdev_opal_get_info", 00:06:35.807 "bdev_opal_create", 00:06:35.807 "bdev_nvme_opal_revert", 00:06:35.807 "bdev_nvme_opal_init", 00:06:35.807 "bdev_nvme_send_cmd", 00:06:35.807 "bdev_nvme_get_path_iostat", 00:06:35.807 "bdev_nvme_get_mdns_discovery_info", 00:06:35.807 "bdev_nvme_stop_mdns_discovery", 00:06:35.807 "bdev_nvme_start_mdns_discovery", 00:06:35.807 "bdev_nvme_set_multipath_policy", 00:06:35.807 "bdev_nvme_set_preferred_path", 00:06:35.807 "bdev_nvme_get_io_paths", 00:06:35.807 "bdev_nvme_remove_error_injection", 00:06:35.807 "bdev_nvme_add_error_injection", 00:06:35.807 "bdev_nvme_get_discovery_info", 00:06:35.807 "bdev_nvme_stop_discovery", 00:06:35.807 "bdev_nvme_start_discovery", 00:06:35.807 "bdev_nvme_get_controller_health_info", 00:06:35.807 "bdev_nvme_disable_controller", 00:06:35.807 "bdev_nvme_enable_controller", 00:06:35.807 "bdev_nvme_reset_controller", 00:06:35.807 "bdev_nvme_get_transport_statistics", 00:06:35.807 "bdev_nvme_apply_firmware", 00:06:35.807 "bdev_nvme_detach_controller", 00:06:35.807 "bdev_nvme_get_controllers", 00:06:35.807 "bdev_nvme_attach_controller", 00:06:35.807 "bdev_nvme_set_hotplug", 00:06:35.807 "bdev_nvme_set_options", 00:06:35.807 "bdev_passthru_delete", 00:06:35.807 "bdev_passthru_create", 00:06:35.807 "bdev_lvol_set_parent_bdev", 00:06:35.807 "bdev_lvol_set_parent", 00:06:35.807 "bdev_lvol_check_shallow_copy", 00:06:35.807 "bdev_lvol_start_shallow_copy", 00:06:35.807 "bdev_lvol_grow_lvstore", 00:06:35.807 "bdev_lvol_get_lvols", 00:06:35.807 "bdev_lvol_get_lvstores", 00:06:35.807 "bdev_lvol_delete", 00:06:35.807 "bdev_lvol_set_read_only", 00:06:35.807 "bdev_lvol_resize", 00:06:35.807 "bdev_lvol_decouple_parent", 00:06:35.807 "bdev_lvol_inflate", 00:06:35.807 "bdev_lvol_rename", 00:06:35.807 "bdev_lvol_clone_bdev", 00:06:35.807 "bdev_lvol_clone", 00:06:35.807 "bdev_lvol_snapshot", 00:06:35.807 "bdev_lvol_create", 00:06:35.807 "bdev_lvol_delete_lvstore", 00:06:35.807 
"bdev_lvol_rename_lvstore", 00:06:35.807 "bdev_lvol_create_lvstore", 00:06:35.807 "bdev_raid_set_options", 00:06:35.807 "bdev_raid_remove_base_bdev", 00:06:35.807 "bdev_raid_add_base_bdev", 00:06:35.807 "bdev_raid_delete", 00:06:35.807 "bdev_raid_create", 00:06:35.807 "bdev_raid_get_bdevs", 00:06:35.807 "bdev_error_inject_error", 00:06:35.807 "bdev_error_delete", 00:06:35.807 "bdev_error_create", 00:06:35.807 "bdev_split_delete", 00:06:35.807 "bdev_split_create", 00:06:35.807 "bdev_delay_delete", 00:06:35.807 "bdev_delay_create", 00:06:35.807 "bdev_delay_update_latency", 00:06:35.807 "bdev_zone_block_delete", 00:06:35.807 "bdev_zone_block_create", 00:06:35.807 "blobfs_create", 00:06:35.807 "blobfs_detect", 00:06:35.807 "blobfs_set_cache_size", 00:06:35.807 "bdev_aio_delete", 00:06:35.807 "bdev_aio_rescan", 00:06:35.807 "bdev_aio_create", 00:06:35.807 "bdev_ftl_set_property", 00:06:35.807 "bdev_ftl_get_properties", 00:06:35.807 "bdev_ftl_get_stats", 00:06:35.807 "bdev_ftl_unmap", 00:06:35.807 "bdev_ftl_unload", 00:06:35.807 "bdev_ftl_delete", 00:06:35.807 "bdev_ftl_load", 00:06:35.807 "bdev_ftl_create", 00:06:35.807 "bdev_virtio_attach_controller", 00:06:35.807 "bdev_virtio_scsi_get_devices", 00:06:35.807 "bdev_virtio_detach_controller", 00:06:35.807 "bdev_virtio_blk_set_hotplug", 00:06:35.807 "bdev_iscsi_delete", 00:06:35.807 "bdev_iscsi_create", 00:06:35.807 "bdev_iscsi_set_options", 00:06:35.807 "accel_error_inject_error", 00:06:35.807 "ioat_scan_accel_module", 00:06:35.807 "dsa_scan_accel_module", 00:06:35.807 "iaa_scan_accel_module", 00:06:35.807 "vfu_virtio_create_scsi_endpoint", 00:06:35.807 "vfu_virtio_scsi_remove_target", 00:06:35.807 "vfu_virtio_scsi_add_target", 00:06:35.807 "vfu_virtio_create_blk_endpoint", 00:06:35.808 "vfu_virtio_delete_endpoint", 00:06:35.808 "keyring_file_remove_key", 00:06:35.808 "keyring_file_add_key", 00:06:35.808 "keyring_linux_set_options", 00:06:35.808 "iscsi_get_histogram", 00:06:35.808 "iscsi_enable_histogram", 00:06:35.808 "iscsi_set_options", 00:06:35.808 "iscsi_get_auth_groups", 00:06:35.808 "iscsi_auth_group_remove_secret", 00:06:35.808 "iscsi_auth_group_add_secret", 00:06:35.808 "iscsi_delete_auth_group", 00:06:35.808 "iscsi_create_auth_group", 00:06:35.808 "iscsi_set_discovery_auth", 00:06:35.808 "iscsi_get_options", 00:06:35.808 "iscsi_target_node_request_logout", 00:06:35.808 "iscsi_target_node_set_redirect", 00:06:35.808 "iscsi_target_node_set_auth", 00:06:35.808 "iscsi_target_node_add_lun", 00:06:35.808 "iscsi_get_stats", 00:06:35.808 "iscsi_get_connections", 00:06:35.808 "iscsi_portal_group_set_auth", 00:06:35.808 "iscsi_start_portal_group", 00:06:35.808 "iscsi_delete_portal_group", 00:06:35.808 "iscsi_create_portal_group", 00:06:35.808 "iscsi_get_portal_groups", 00:06:35.808 "iscsi_delete_target_node", 00:06:35.808 "iscsi_target_node_remove_pg_ig_maps", 00:06:35.808 "iscsi_target_node_add_pg_ig_maps", 00:06:35.808 "iscsi_create_target_node", 00:06:35.808 "iscsi_get_target_nodes", 00:06:35.808 "iscsi_delete_initiator_group", 00:06:35.808 "iscsi_initiator_group_remove_initiators", 00:06:35.808 "iscsi_initiator_group_add_initiators", 00:06:35.808 "iscsi_create_initiator_group", 00:06:35.808 "iscsi_get_initiator_groups", 00:06:35.808 "nvmf_set_crdt", 00:06:35.808 "nvmf_set_config", 00:06:35.808 "nvmf_set_max_subsystems", 00:06:35.808 "nvmf_stop_mdns_prr", 00:06:35.808 "nvmf_publish_mdns_prr", 00:06:35.808 "nvmf_subsystem_get_listeners", 00:06:35.808 "nvmf_subsystem_get_qpairs", 00:06:35.808 "nvmf_subsystem_get_controllers", 00:06:35.808 
"nvmf_get_stats", 00:06:35.808 "nvmf_get_transports", 00:06:35.808 "nvmf_create_transport", 00:06:35.808 "nvmf_get_targets", 00:06:35.808 "nvmf_delete_target", 00:06:35.808 "nvmf_create_target", 00:06:35.808 "nvmf_subsystem_allow_any_host", 00:06:35.808 "nvmf_subsystem_remove_host", 00:06:35.808 "nvmf_subsystem_add_host", 00:06:35.808 "nvmf_ns_remove_host", 00:06:35.808 "nvmf_ns_add_host", 00:06:35.808 "nvmf_subsystem_remove_ns", 00:06:35.808 "nvmf_subsystem_add_ns", 00:06:35.808 "nvmf_subsystem_listener_set_ana_state", 00:06:35.808 "nvmf_discovery_get_referrals", 00:06:35.808 "nvmf_discovery_remove_referral", 00:06:35.808 "nvmf_discovery_add_referral", 00:06:35.808 "nvmf_subsystem_remove_listener", 00:06:35.808 "nvmf_subsystem_add_listener", 00:06:35.808 "nvmf_delete_subsystem", 00:06:35.808 "nvmf_create_subsystem", 00:06:35.808 "nvmf_get_subsystems", 00:06:35.808 "env_dpdk_get_mem_stats", 00:06:35.808 "nbd_get_disks", 00:06:35.808 "nbd_stop_disk", 00:06:35.808 "nbd_start_disk", 00:06:35.808 "ublk_recover_disk", 00:06:35.808 "ublk_get_disks", 00:06:35.808 "ublk_stop_disk", 00:06:35.808 "ublk_start_disk", 00:06:35.808 "ublk_destroy_target", 00:06:35.808 "ublk_create_target", 00:06:35.808 "virtio_blk_create_transport", 00:06:35.808 "virtio_blk_get_transports", 00:06:35.808 "vhost_controller_set_coalescing", 00:06:35.808 "vhost_get_controllers", 00:06:35.808 "vhost_delete_controller", 00:06:35.808 "vhost_create_blk_controller", 00:06:35.808 "vhost_scsi_controller_remove_target", 00:06:35.808 "vhost_scsi_controller_add_target", 00:06:35.808 "vhost_start_scsi_controller", 00:06:35.808 "vhost_create_scsi_controller", 00:06:35.808 "thread_set_cpumask", 00:06:35.808 "framework_get_governor", 00:06:35.808 "framework_get_scheduler", 00:06:35.808 "framework_set_scheduler", 00:06:35.808 "framework_get_reactors", 00:06:35.808 "thread_get_io_channels", 00:06:35.808 "thread_get_pollers", 00:06:35.808 "thread_get_stats", 00:06:35.808 "framework_monitor_context_switch", 00:06:35.808 "spdk_kill_instance", 00:06:35.808 "log_enable_timestamps", 00:06:35.808 "log_get_flags", 00:06:35.808 "log_clear_flag", 00:06:35.808 "log_set_flag", 00:06:35.808 "log_get_level", 00:06:35.808 "log_set_level", 00:06:35.808 "log_get_print_level", 00:06:35.808 "log_set_print_level", 00:06:35.808 "framework_enable_cpumask_locks", 00:06:35.808 "framework_disable_cpumask_locks", 00:06:35.808 "framework_wait_init", 00:06:35.808 "framework_start_init", 00:06:35.808 "scsi_get_devices", 00:06:35.808 "bdev_get_histogram", 00:06:35.808 "bdev_enable_histogram", 00:06:35.808 "bdev_set_qos_limit", 00:06:35.808 "bdev_set_qd_sampling_period", 00:06:35.808 "bdev_get_bdevs", 00:06:35.808 "bdev_reset_iostat", 00:06:35.808 "bdev_get_iostat", 00:06:35.808 "bdev_examine", 00:06:35.808 "bdev_wait_for_examine", 00:06:35.808 "bdev_set_options", 00:06:35.808 "notify_get_notifications", 00:06:35.808 "notify_get_types", 00:06:35.808 "accel_get_stats", 00:06:35.808 "accel_set_options", 00:06:35.808 "accel_set_driver", 00:06:35.808 "accel_crypto_key_destroy", 00:06:35.808 "accel_crypto_keys_get", 00:06:35.808 "accel_crypto_key_create", 00:06:35.808 "accel_assign_opc", 00:06:35.808 "accel_get_module_info", 00:06:35.808 "accel_get_opc_assignments", 00:06:35.808 "vmd_rescan", 00:06:35.808 "vmd_remove_device", 00:06:35.808 "vmd_enable", 00:06:35.808 "sock_get_default_impl", 00:06:35.808 "sock_set_default_impl", 00:06:35.808 "sock_impl_set_options", 00:06:35.808 "sock_impl_get_options", 00:06:35.808 "iobuf_get_stats", 00:06:35.808 "iobuf_set_options", 
00:06:35.808 "keyring_get_keys", 00:06:35.808 "framework_get_pci_devices", 00:06:35.808 "framework_get_config", 00:06:35.808 "framework_get_subsystems", 00:06:35.808 "vfu_tgt_set_base_path", 00:06:35.808 "trace_get_info", 00:06:35.808 "trace_get_tpoint_group_mask", 00:06:35.808 "trace_disable_tpoint_group", 00:06:35.808 "trace_enable_tpoint_group", 00:06:35.808 "trace_clear_tpoint_mask", 00:06:35.808 "trace_set_tpoint_mask", 00:06:35.808 "spdk_get_version", 00:06:35.808 "rpc_get_methods" 00:06:35.808 ] 00:06:35.808 19:02:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:35.808 19:02:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:35.808 19:02:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1217390 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1217390 ']' 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1217390 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1217390 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1217390' 00:06:35.808 killing process with pid 1217390 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1217390 00:06:35.808 19:02:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1217390 00:06:36.069 00:06:36.069 real 0m1.407s 00:06:36.069 user 0m2.575s 00:06:36.069 sys 0m0.426s 00:06:36.069 19:02:42 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.069 19:02:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.069 ************************************ 00:06:36.069 END TEST spdkcli_tcp 00:06:36.069 ************************************ 00:06:36.069 19:02:42 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.069 19:02:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.069 19:02:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.069 19:02:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.069 19:02:42 -- common/autotest_common.sh@10 -- # set +x 00:06:36.329 ************************************ 00:06:36.329 START TEST dpdk_mem_utility 00:06:36.329 ************************************ 00:06:36.329 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:36.329 * Looking for test storage... 
00:06:36.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:36.329 19:02:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:36.329 19:02:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1217675 00:06:36.330 19:02:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1217675 00:06:36.330 19:02:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:36.330 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1217675 ']' 00:06:36.330 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.330 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.330 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.330 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.330 19:02:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:36.330 [2024-07-12 19:02:42.374668] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:36.330 [2024-07-12 19:02:42.374746] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217675 ] 00:06:36.330 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.330 [2024-07-12 19:02:42.438452] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.591 [2024-07-12 19:02:42.513707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.161 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.161 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:37.161 19:02:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:37.161 19:02:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:37.161 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.161 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.161 { 00:06:37.161 "filename": "/tmp/spdk_mem_dump.txt" 00:06:37.161 } 00:06:37.161 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.161 19:02:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:37.161 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:37.161 1 heaps totaling size 814.000000 MiB 00:06:37.161 size: 814.000000 MiB heap id: 0 00:06:37.161 end heaps---------- 00:06:37.161 8 mempools totaling size 598.116089 MiB 00:06:37.161 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:37.161 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:37.161 size: 84.521057 MiB name: bdev_io_1217675 00:06:37.161 size: 51.011292 MiB name: evtpool_1217675 00:06:37.161 
size: 50.003479 MiB name: msgpool_1217675 00:06:37.161 size: 21.763794 MiB name: PDU_Pool 00:06:37.161 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:37.161 size: 0.026123 MiB name: Session_Pool 00:06:37.161 end mempools------- 00:06:37.161 6 memzones totaling size 4.142822 MiB 00:06:37.161 size: 1.000366 MiB name: RG_ring_0_1217675 00:06:37.161 size: 1.000366 MiB name: RG_ring_1_1217675 00:06:37.161 size: 1.000366 MiB name: RG_ring_4_1217675 00:06:37.161 size: 1.000366 MiB name: RG_ring_5_1217675 00:06:37.161 size: 0.125366 MiB name: RG_ring_2_1217675 00:06:37.161 size: 0.015991 MiB name: RG_ring_3_1217675 00:06:37.161 end memzones------- 00:06:37.161 19:02:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:37.161 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:37.161 list of free elements. size: 12.519348 MiB 00:06:37.161 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:37.161 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:37.161 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:37.161 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:37.161 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:37.161 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:37.161 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:37.161 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:37.161 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:37.161 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:37.161 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:37.161 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:37.161 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:37.161 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:37.162 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:37.162 list of standard malloc elements. 
size: 199.218079 MiB 00:06:37.162 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:37.162 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:37.162 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:37.162 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:37.162 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:37.162 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:37.162 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:37.162 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:37.162 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:37.162 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:37.162 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:37.162 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:37.162 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:37.162 list of memzone associated elements. 
size: 602.262573 MiB 00:06:37.162 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:37.162 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:37.162 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:37.162 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:37.162 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:37.162 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1217675_0 00:06:37.162 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:37.162 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1217675_0 00:06:37.162 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:37.162 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1217675_0 00:06:37.162 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:37.162 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:37.162 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:37.162 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:37.162 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:37.162 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1217675 00:06:37.162 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:37.162 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1217675 00:06:37.162 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:37.162 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1217675 00:06:37.162 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:37.162 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:37.162 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:37.162 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:37.162 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:37.162 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:37.162 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:37.162 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:37.162 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:37.162 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1217675 00:06:37.162 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:37.162 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1217675 00:06:37.162 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:37.162 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1217675 00:06:37.162 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:37.162 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1217675 00:06:37.162 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:37.162 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1217675 00:06:37.162 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:37.162 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:37.162 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:37.162 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:37.162 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:37.162 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:37.162 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:37.162 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1217675 00:06:37.162 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:37.162 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:37.162 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:37.162 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:37.162 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:37.162 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1217675 00:06:37.162 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:37.162 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:37.162 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:37.162 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1217675 00:06:37.162 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:37.162 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1217675 00:06:37.162 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:37.162 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:37.162 19:02:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:37.162 19:02:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1217675 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1217675 ']' 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1217675 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1217675 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1217675' 00:06:37.162 killing process with pid 1217675 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1217675 00:06:37.162 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1217675 00:06:37.422 00:06:37.422 real 0m1.279s 00:06:37.422 user 0m1.331s 00:06:37.422 sys 0m0.379s 00:06:37.422 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.422 19:02:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:37.422 ************************************ 00:06:37.422 END TEST dpdk_mem_utility 00:06:37.422 ************************************ 00:06:37.422 19:02:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:37.422 19:02:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:37.422 19:02:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.422 19:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.422 19:02:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.683 ************************************ 00:06:37.683 START TEST event 00:06:37.683 ************************************ 00:06:37.683 19:02:43 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:37.683 * Looking for test storage... 
00:06:37.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:37.683 19:02:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:37.683 19:02:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:37.683 19:02:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.683 19:02:43 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:37.683 19:02:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.683 19:02:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.683 ************************************ 00:06:37.683 START TEST event_perf 00:06:37.683 ************************************ 00:06:37.683 19:02:43 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:37.683 Running I/O for 1 seconds...[2024-07-12 19:02:43.725344] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:37.683 [2024-07-12 19:02:43.725442] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218001 ] 00:06:37.683 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.683 [2024-07-12 19:02:43.795500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:37.942 [2024-07-12 19:02:43.873460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.942 [2024-07-12 19:02:43.873568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.942 [2024-07-12 19:02:43.873725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.942 Running I/O for 1 seconds...[2024-07-12 19:02:43.873725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.880 00:06:38.880 lcore 0: 172551 00:06:38.880 lcore 1: 172551 00:06:38.880 lcore 2: 172547 00:06:38.880 lcore 3: 172549 00:06:38.880 done. 00:06:38.880 00:06:38.880 real 0m1.224s 00:06:38.880 user 0m4.144s 00:06:38.880 sys 0m0.077s 00:06:38.880 19:02:44 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.880 19:02:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.880 ************************************ 00:06:38.880 END TEST event_perf 00:06:38.880 ************************************ 00:06:38.880 19:02:44 event -- common/autotest_common.sh@1142 -- # return 0 00:06:38.880 19:02:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:38.880 19:02:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:38.880 19:02:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.880 19:02:44 event -- common/autotest_common.sh@10 -- # set +x 00:06:38.880 ************************************ 00:06:38.880 START TEST event_reactor 00:06:38.880 ************************************ 00:06:38.880 19:02:45 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:39.140 [2024-07-12 19:02:45.023859] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
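event_perf above ran one reactor per bit of the 0xF core mask for one second (-t 1) and printed how many events each lcore processed, roughly 172.5k per core in this run. Re-running it by hand with a different mask and duration would look like this (binary path as in the trace, relative to the SPDK test tree):

  # Two reactors (cores 0 and 1), five-second measurement window
  ./spdk/test/event/event_perf/event_perf -m 0x3 -t 5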
00:06:39.140 [2024-07-12 19:02:45.023950] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218353 ] 00:06:39.140 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.140 [2024-07-12 19:02:45.086606] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.140 [2024-07-12 19:02:45.149149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.079 test_start 00:06:40.079 oneshot 00:06:40.079 tick 100 00:06:40.079 tick 100 00:06:40.079 tick 250 00:06:40.079 tick 100 00:06:40.079 tick 100 00:06:40.079 tick 250 00:06:40.079 tick 100 00:06:40.079 tick 500 00:06:40.079 tick 100 00:06:40.079 tick 100 00:06:40.079 tick 250 00:06:40.079 tick 100 00:06:40.079 tick 100 00:06:40.079 test_end 00:06:40.079 00:06:40.079 real 0m1.198s 00:06:40.079 user 0m1.132s 00:06:40.079 sys 0m0.062s 00:06:40.079 19:02:46 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.079 19:02:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:40.079 ************************************ 00:06:40.079 END TEST event_reactor 00:06:40.079 ************************************ 00:06:40.339 19:02:46 event -- common/autotest_common.sh@1142 -- # return 0 00:06:40.339 19:02:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.339 19:02:46 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:40.339 19:02:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.339 19:02:46 event -- common/autotest_common.sh@10 -- # set +x 00:06:40.339 ************************************ 00:06:40.339 START TEST event_reactor_perf 00:06:40.339 ************************************ 00:06:40.339 19:02:46 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:40.339 [2024-07-12 19:02:46.302091] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:06:40.339 [2024-07-12 19:02:46.302195] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218712 ] 00:06:40.339 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.339 [2024-07-12 19:02:46.365364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.339 [2024-07-12 19:02:46.433619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.720 test_start 00:06:41.720 test_end 00:06:41.720 Performance: 370295 events per second 00:06:41.720 00:06:41.720 real 0m1.206s 00:06:41.720 user 0m1.131s 00:06:41.720 sys 0m0.072s 00:06:41.720 19:02:47 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.720 19:02:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:41.720 ************************************ 00:06:41.720 END TEST event_reactor_perf 00:06:41.720 ************************************ 00:06:41.720 19:02:47 event -- common/autotest_common.sh@1142 -- # return 0 00:06:41.720 19:02:47 event -- event/event.sh@49 -- # uname -s 00:06:41.720 19:02:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:41.721 19:02:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.721 19:02:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.721 19:02:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.721 19:02:47 event -- common/autotest_common.sh@10 -- # set +x 00:06:41.721 ************************************ 00:06:41.721 START TEST event_scheduler 00:06:41.721 ************************************ 00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:41.721 * Looking for test storage... 00:06:41.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:41.721 19:02:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:41.721 19:02:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1218987 00:06:41.721 19:02:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:41.721 19:02:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:41.721 19:02:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1218987 00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1218987 ']' 00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.721 19:02:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:41.721 [2024-07-12 19:02:47.723876] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:06:41.721 [2024-07-12 19:02:47.723948] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218987 ] 00:06:41.721 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.721 [2024-07-12 19:02:47.778935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:41.721 [2024-07-12 19:02:47.846549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.721 [2024-07-12 19:02:47.846705] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.721 [2024-07-12 19:02:47.846864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:41.721 [2024-07-12 19:02:47.846866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:06:42.661 19:02:48 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 [2024-07-12 19:02:48.512926] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:42.661 [2024-07-12 19:02:48.512941] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:42.661 [2024-07-12 19:02:48.512948] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:42.661 [2024-07-12 19:02:48.512952] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:42.661 [2024-07-12 19:02:48.512956] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 [2024-07-12 19:02:48.567329] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
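Because the scheduler test app is launched with --wait-for-rpc, its framework stays paused until a scheduler is selected and initialization is kicked off over RPC, which is what the rpc_cmd calls above do. Driving those two steps by hand against a paused app would look roughly like this (assuming the default /var/tmp/spdk.sock RPC socket):

  ./spdk/scripts/rpc.py framework_set_scheduler dynamic   # select the dynamic scheduler before init
  ./spdk/scripts/rpc.py framework_start_init              # let subsystem initialization proceed

The dpdk_governor error in the trace indicates the governor could not initialize because the 0xF core mask covers only part of an SMT sibling set; the dynamic scheduler and the test proceed without it.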
00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 ************************************ 00:06:42.661 START TEST scheduler_create_thread 00:06:42.661 ************************************ 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 2 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 3 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 4 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 5 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 6 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 7 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 8 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.661 9 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.661 19:02:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.232 10 00:06:43.232 19:02:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.232 19:02:49 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:43.232 19:02:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.232 19:02:49 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:44.614 19:02:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:44.614 19:02:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:44.614 19:02:50 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:44.614 19:02:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:44.614 19:02:50 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:45.185 19:02:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.185 19:02:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:45.185 19:02:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.185 19:02:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.126 19:02:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.126 19:02:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:46.126 19:02:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:46.126 19:02:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:46.126 19:02:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.698 19:02:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:46.698 00:06:46.698 real 0m4.224s 00:06:46.698 user 0m0.023s 00:06:46.698 sys 0m0.008s 00:06:46.698 19:02:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.957 19:02:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:46.957 ************************************ 00:06:46.957 END TEST scheduler_create_thread 00:06:46.957 ************************************ 00:06:46.957 19:02:52 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:46.957 19:02:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:46.957 19:02:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1218987 00:06:46.957 19:02:52 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1218987 ']' 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1218987 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1218987 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1218987' 00:06:46.958 killing process with pid 1218987 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1218987 00:06:46.958 19:02:52 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1218987 00:06:47.217 [2024-07-12 19:02:53.108594] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
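The scheduler_create_thread sub-test above drives SPDK thread placement through the scheduler test plugin: it creates pinned active and idle threads, lowers one thread's active load, and deletes another. Issued by hand, the same plugin RPCs look roughly like this (this assumes the scheduler_plugin module is importable by rpc.py, which the test harness arranges; thread IDs 11 and 12 mirror the ones returned in the trace):

  rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100   # busy thread pinned to core 0
  rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0       # idle thread pinned to core 1
  rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50                        # drop thread 11 to 50% active time
  rpc.py --plugin scheduler_plugin scheduler_thread_delete 12                               # remove thread 12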
00:06:47.217 00:06:47.217 real 0m5.711s 00:06:47.217 user 0m12.751s 00:06:47.217 sys 0m0.364s 00:06:47.217 19:02:53 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.217 19:02:53 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:47.217 ************************************ 00:06:47.217 END TEST event_scheduler 00:06:47.217 ************************************ 00:06:47.217 19:02:53 event -- common/autotest_common.sh@1142 -- # return 0 00:06:47.217 19:02:53 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:47.217 19:02:53 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:47.217 19:02:53 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.217 19:02:53 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.217 19:02:53 event -- common/autotest_common.sh@10 -- # set +x 00:06:47.477 ************************************ 00:06:47.477 START TEST app_repeat 00:06:47.477 ************************************ 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1220158 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1220158' 00:06:47.477 Process app_repeat pid: 1220158 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:47.477 spdk_app_start Round 0 00:06:47.477 19:02:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1220158 /var/tmp/spdk-nbd.sock 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1220158 ']' 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:47.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.477 19:02:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:47.477 [2024-07-12 19:02:53.399551] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
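The app_repeat run that follows is driven from event.sh: the helper binary is started in the background against its own RPC socket, its pid is recorded, and a trap guarantees cleanup if the test is interrupted. A minimal sketch of that prologue, with the long Jenkins workspace prefix abbreviated to $SPDK_DIR (an abbreviation assumed here for readability):

    modprobe -n nbd                               # dry-run: skip the test if the nbd module is unavailable
    modprobe nbd

    rpc_server=/var/tmp/spdk-nbd.sock
    $SPDK_DIR/test/event/app_repeat/app_repeat -r "$rpc_server" -m 0x3 -t 4 &
    repeat_pid=$!
    trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
    echo "Process app_repeat pid: $repeat_pid"

The -m 0x3 mask gives the app two reactors (cores 0 and 1), matching the pair of "Reactor started" notices printed at every round, and -t 4 sets the four framework iterations that show up below as Rounds 0 through 3.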
00:06:47.477 [2024-07-12 19:02:53.399621] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1220158 ] 00:06:47.477 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.477 [2024-07-12 19:02:53.460432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:47.477 [2024-07-12 19:02:53.527180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.477 [2024-07-12 19:02:53.527192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.048 19:02:54 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.048 19:02:54 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:48.048 19:02:54 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.309 Malloc0 00:06:48.309 19:02:54 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:48.569 Malloc1 00:06:48.569 19:02:54 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:48.569 /dev/nbd0 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:48.569 19:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:48.569 19:02:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:48.569 19:02:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:48.569 19:02:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.569 19:02:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.569 19:02:54 event.app_repeat 
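Once the app is listening, each round creates two malloc bdevs over the nbd socket, exports them as kernel NBD devices, and probes each device with a single direct-I/O read before using it. Roughly, with the rpc.py path shortened and /tmp/nbdtest standing in for the test's scratch file (both abbreviations, not the literal paths from the scripts):

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    # two 64 MiB malloc bdevs with a 4096-byte block size (names auto-assigned)
    $rpc bdev_malloc_create 64 4096        # -> Malloc0
    $rpc bdev_malloc_create 64 4096        # -> Malloc1

    # expose each bdev through the kernel nbd driver
    $rpc nbd_start_disk Malloc0 /dev/nbd0
    $rpc nbd_start_disk Malloc1 /dev/nbd1

    # waitfornbd pattern: the device is usable once it shows up in /proc/partitions and is readable
    for name in nbd0 nbd1; do
        grep -q -w "$name" /proc/partitions
        dd if=/dev/$name of=/tmp/nbdtest bs=4096 count=1 iflag=direct
        size=$(stat -c %s /tmp/nbdtest); rm -f /tmp/nbdtest
        [ "$size" != 0 ]
    done

    $rpc nbd_get_disks                     # both devices reported back as JSON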
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.831 1+0 records in 00:06:48.831 1+0 records out 00:06:48.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216241 s, 18.9 MB/s 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:48.831 /dev/nbd1 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:48.831 1+0 records in 00:06:48.831 1+0 records out 00:06:48.831 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00021565 s, 19.0 MB/s 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:48.831 19:02:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:48.831 19:02:54 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.831 19:02:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:49.093 { 00:06:49.093 "nbd_device": "/dev/nbd0", 00:06:49.093 "bdev_name": "Malloc0" 00:06:49.093 }, 00:06:49.093 { 00:06:49.093 "nbd_device": "/dev/nbd1", 00:06:49.093 "bdev_name": "Malloc1" 00:06:49.093 } 00:06:49.093 ]' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:49.093 { 00:06:49.093 "nbd_device": "/dev/nbd0", 00:06:49.093 "bdev_name": "Malloc0" 00:06:49.093 }, 00:06:49.093 { 00:06:49.093 "nbd_device": "/dev/nbd1", 00:06:49.093 "bdev_name": "Malloc1" 00:06:49.093 } 00:06:49.093 ]' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:49.093 /dev/nbd1' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:49.093 /dev/nbd1' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:49.093 256+0 records in 00:06:49.093 256+0 records out 00:06:49.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0121871 s, 86.0 MB/s 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:49.093 256+0 records in 00:06:49.093 256+0 records out 00:06:49.093 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.017325 s, 60.5 MB/s 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:49.093 256+0 records in 00:06:49.093 256+0 records out 00:06:49.093 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0169314 s, 61.9 MB/s 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.093 19:02:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:49.354 19:02:55 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:49.615 19:02:55 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.615 19:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.876 19:02:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.876 19:02:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.876 19:02:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.138 [2024-07-12 19:02:56.055935] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.138 [2024-07-12 19:02:56.120243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.138 [2024-07-12 19:02:56.120367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.138 [2024-07-12 19:02:56.151609] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.138 [2024-07-12 19:02:56.151641] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:53.433 19:02:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:53.433 19:02:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:53.433 spdk_app_start Round 1 00:06:53.433 19:02:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1220158 /var/tmp/spdk-nbd.sock 00:06:53.433 19:02:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1220158 ']' 00:06:53.433 19:02:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:53.433 19:02:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.433 19:02:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:53.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
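The nbd_dd_data_verify steps traced in Round 0 above follow a plain write-then-compare pattern: 1 MiB of random data is generated once, written through both NBD devices with direct I/O, and then each device is read back and compared byte-for-byte against the reference file. A sketch of the core of it (scratch file path abbreviated):

    ref=/tmp/nbdrandtest
    dd if=/dev/urandom of=$ref bs=4096 count=256           # 1 MiB reference pattern

    for dev in /dev/nbd0 /dev/nbd1; do
        dd if=$ref of=$dev bs=4096 count=256 oflag=direct   # write it through each device
    done

    for dev in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M $ref $dev                              # compare the first 1 MiB byte by byte
    done

    rm $ref

Because the malloc bdevs live entirely in target memory, any mismatch here points at the nbd/bdev data path rather than at a physical disk.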
00:06:53.433 19:02:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.433 19:02:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:53.433 19:02:59 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.433 Malloc0 00:06:53.433 19:02:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.433 Malloc1 00:06:53.433 19:02:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.433 /dev/nbd0 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.433 19:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:53.433 19:02:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:53.694 1+0 records in 00:06:53.694 1+0 records out 00:06:53.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277891 s, 14.7 MB/s 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.694 /dev/nbd1 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.694 1+0 records in 00:06:53.694 1+0 records out 00:06:53.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363426 s, 11.3 MB/s 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:53.694 19:02:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.694 19:02:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:53.954 { 00:06:53.954 "nbd_device": "/dev/nbd0", 00:06:53.954 "bdev_name": "Malloc0" 00:06:53.954 }, 00:06:53.954 { 00:06:53.954 "nbd_device": "/dev/nbd1", 00:06:53.954 "bdev_name": "Malloc1" 00:06:53.954 } 00:06:53.954 ]' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:53.954 { 00:06:53.954 "nbd_device": "/dev/nbd0", 00:06:53.954 "bdev_name": "Malloc0" 00:06:53.954 }, 00:06:53.954 { 00:06:53.954 "nbd_device": "/dev/nbd1", 00:06:53.954 "bdev_name": "Malloc1" 00:06:53.954 } 00:06:53.954 ]' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:53.954 /dev/nbd1' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:53.954 /dev/nbd1' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:53.954 19:02:59 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:53.954 256+0 records in 00:06:53.955 256+0 records out 00:06:53.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0124446 s, 84.3 MB/s 00:06:53.955 19:02:59 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.955 19:02:59 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:53.955 256+0 records in 00:06:53.955 256+0 records out 00:06:53.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0230083 s, 45.6 MB/s 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:53.955 256+0 records in 00:06:53.955 256+0 records out 00:06:53.955 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0192352 s, 54.5 MB/s 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:53.955 19:03:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.215 19:03:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.475 19:03:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.735 19:03:00 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.735 19:03:00 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:54.735 19:03:00 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:54.995 [2024-07-12 19:03:00.920129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:54.995 [2024-07-12 19:03:00.983871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.995 [2024-07-12 19:03:00.983872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.995 [2024-07-12 19:03:01.016144] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:54.995 [2024-07-12 19:03:01.016186] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.293 19:03:03 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:58.294 19:03:03 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:58.294 spdk_app_start Round 2 00:06:58.294 19:03:03 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1220158 /var/tmp/spdk-nbd.sock 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1220158 ']' 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
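Teardown mirrors the setup: each NBD device is stopped over RPC and the script waits for the kernel to drop it from /proc/partitions before the round is allowed to finish. A sketch of that waitfornbd_exit pattern (the 20-iteration bound matches the loop counters in the trace; the sleep interval is an assumption):

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for dev in /dev/nbd0 /dev/nbd1; do
        $rpc nbd_stop_disk $dev
        name=$(basename $dev)
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$name" /proc/partitions || break    # entry gone -> device released
            sleep 0.1
        done
    done

    # with both devices stopped, the target reports an empty disk list
    [ "$($rpc nbd_get_disks)" = "[]" ]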
00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.294 19:03:03 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:58.294 19:03:03 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.294 Malloc0 00:06:58.294 19:03:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.294 Malloc1 00:06:58.294 19:03:04 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.294 /dev/nbd0 00:06:58.294 19:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.555 19:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:58.555 1+0 records in 00:06:58.555 1+0 records out 00:06:58.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280188 s, 14.6 MB/s 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.555 19:03:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.556 /dev/nbd1 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.556 1+0 records in 00:06:58.556 1+0 records out 00:06:58.556 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283087 s, 14.5 MB/s 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.556 19:03:04 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.556 19:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:58.817 { 00:06:58.817 "nbd_device": "/dev/nbd0", 00:06:58.817 "bdev_name": "Malloc0" 00:06:58.817 }, 00:06:58.817 { 00:06:58.817 "nbd_device": "/dev/nbd1", 00:06:58.817 "bdev_name": "Malloc1" 00:06:58.817 } 00:06:58.817 ]' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:58.817 { 00:06:58.817 "nbd_device": "/dev/nbd0", 00:06:58.817 "bdev_name": "Malloc0" 00:06:58.817 }, 00:06:58.817 { 00:06:58.817 "nbd_device": "/dev/nbd1", 00:06:58.817 "bdev_name": "Malloc1" 00:06:58.817 } 00:06:58.817 ]' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:58.817 /dev/nbd1' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:58.817 /dev/nbd1' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:58.817 256+0 records in 00:06:58.817 256+0 records out 00:06:58.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0117364 s, 89.3 MB/s 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:58.817 256+0 records in 00:06:58.817 256+0 records out 00:06:58.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167474 s, 62.6 MB/s 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:58.817 256+0 records in 00:06:58.817 256+0 records out 00:06:58.817 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.018317 s, 57.2 MB/s 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:58.817 19:03:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.078 19:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.079 19:03:05 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.339 19:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.600 19:03:05 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.600 19:03:05 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:59.600 19:03:05 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:59.861 [2024-07-12 19:03:05.781924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:59.861 [2024-07-12 19:03:05.845637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.861 [2024-07-12 19:03:05.845640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.861 [2024-07-12 19:03:05.877016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:59.861 [2024-07-12 19:03:05.877052] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.257 19:03:08 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1220158 /var/tmp/spdk-nbd.sock 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1220158 ']' 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
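What ties the three rounds together is a short loop in event.sh: app_repeat was launched once with -t 4, so it runs its framework four times, and after each round the script asks the running instance to shut down over RPC, sleeps, and waits for the socket to come back before the next pass. A sketch of that control flow, using the helper names visible in the trace:

    rpc="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

    for i in 0 1 2; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock    # wait for this round's RPC socket
        # ... per-round malloc/NBD create, write/verify and teardown, as traced above ...
        $rpc spdk_kill_instance SIGTERM                     # end this framework iteration
        sleep 3                                             # let the app spin up the next one
    done

    waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock        # Round 3 comes up...
    killprocess $repeat_pid                                 # ...and is torn down for good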
00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:07:03.257 19:03:08 event.app_repeat -- event/event.sh@39 -- # killprocess 1220158 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1220158 ']' 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1220158 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1220158 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1220158' 00:07:03.257 killing process with pid 1220158 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1220158 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1220158 00:07:03.257 spdk_app_start is called in Round 0. 00:07:03.257 Shutdown signal received, stop current app iteration 00:07:03.257 Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 reinitialization... 00:07:03.257 spdk_app_start is called in Round 1. 00:07:03.257 Shutdown signal received, stop current app iteration 00:07:03.257 Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 reinitialization... 00:07:03.257 spdk_app_start is called in Round 2. 00:07:03.257 Shutdown signal received, stop current app iteration 00:07:03.257 Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 reinitialization... 00:07:03.257 spdk_app_start is called in Round 3. 
00:07:03.257 Shutdown signal received, stop current app iteration 00:07:03.257 19:03:08 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:03.257 19:03:08 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:03.257 00:07:03.257 real 0m15.613s 00:07:03.257 user 0m33.671s 00:07:03.257 sys 0m2.070s 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.257 19:03:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:03.257 ************************************ 00:07:03.257 END TEST app_repeat 00:07:03.257 ************************************ 00:07:03.257 19:03:09 event -- common/autotest_common.sh@1142 -- # return 0 00:07:03.257 19:03:09 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:03.258 19:03:09 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:03.258 19:03:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.258 19:03:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.258 19:03:09 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.258 ************************************ 00:07:03.258 START TEST cpu_locks 00:07:03.258 ************************************ 00:07:03.258 19:03:09 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:03.258 * Looking for test storage... 00:07:03.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:03.258 19:03:09 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:03.258 19:03:09 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:03.258 19:03:09 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:03.258 19:03:09 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:03.258 19:03:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:03.258 19:03:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.258 19:03:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.258 ************************************ 00:07:03.258 START TEST default_locks 00:07:03.258 ************************************ 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1223650 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1223650 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1223650 ']' 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.258 19:03:09 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:03.258 [2024-07-12 19:03:09.246853] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:03.258 [2024-07-12 19:03:09.246921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1223650 ] 00:07:03.258 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.258 [2024-07-12 19:03:09.310076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.258 [2024-07-12 19:03:09.387159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.203 lslocks: write error 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1223650 ']' 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1223650' 00:07:04.203 killing process with pid 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1223650 00:07:04.203 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1223650 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1223650 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1223650 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1223650 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1223650 ']' 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.464 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1223650) - No such process 00:07:04.464 ERROR: process (pid: 1223650) is no longer running 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:04.464 00:07:04.464 real 0m1.299s 00:07:04.464 user 0m1.376s 00:07:04.464 sys 0m0.424s 00:07:04.464 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.465 19:03:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.465 ************************************ 00:07:04.465 END TEST default_locks 00:07:04.465 ************************************ 00:07:04.465 19:03:10 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:04.465 19:03:10 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:04.465 19:03:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:04.465 19:03:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.465 19:03:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:04.465 ************************************ 00:07:04.465 START TEST default_locks_via_rpc 00:07:04.465 ************************************ 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1224233 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1224233 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1224233 ']' 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:04.465 19:03:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.725 [2024-07-12 19:03:10.618689] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:04.725 [2024-07-12 19:03:10.618740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224233 ] 00:07:04.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.725 [2024-07-12 19:03:10.679590] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.725 [2024-07-12 19:03:10.746553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1224233 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1224233 00:07:05.297 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1224233 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1224233 ']' 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1224233 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224233 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224233' 00:07:05.868 killing process with pid 1224233 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1224233 00:07:05.868 19:03:11 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1224233 00:07:06.129 00:07:06.129 real 0m1.580s 00:07:06.129 user 0m1.674s 00:07:06.129 sys 0m0.529s 00:07:06.129 19:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.129 19:03:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.129 ************************************ 00:07:06.129 END TEST default_locks_via_rpc 00:07:06.129 ************************************ 00:07:06.129 19:03:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:06.129 19:03:12 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:06.129 19:03:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.129 19:03:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.129 19:03:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.129 ************************************ 00:07:06.129 START TEST non_locking_app_on_locked_coremask 00:07:06.129 ************************************ 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1224709 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1224709 /var/tmp/spdk.sock 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1224709 ']' 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.129 19:03:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:06.390 [2024-07-12 19:03:12.265193] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:06.390 [2024-07-12 19:03:12.265246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224709 ] 00:07:06.390 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.390 [2024-07-12 19:03:12.324433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.390 [2024-07-12 19:03:12.392758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1224978 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1224978 /var/tmp/spdk2.sock 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1224978 ']' 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:06.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.966 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:06.966 [2024-07-12 19:03:13.069720] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:06.966 [2024-07-12 19:03:13.069772] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1224978 ] 00:07:06.966 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.227 [2024-07-12 19:03:13.157006] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:07.227 [2024-07-12 19:03:13.157033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.227 [2024-07-12 19:03:13.286045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.797 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.797 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:07.797 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1224709 00:07:07.797 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:07.797 19:03:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1224709 00:07:08.368 lslocks: write error 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1224709 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1224709 ']' 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1224709 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224709 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224709' 00:07:08.368 killing process with pid 1224709 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1224709 00:07:08.368 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1224709 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1224978 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1224978 ']' 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1224978 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1224978 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1224978' 00:07:08.938 
killing process with pid 1224978 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1224978 00:07:08.938 19:03:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1224978 00:07:08.938 00:07:08.938 real 0m2.857s 00:07:08.938 user 0m3.118s 00:07:08.938 sys 0m0.852s 00:07:08.938 19:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.938 19:03:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:08.938 ************************************ 00:07:08.938 END TEST non_locking_app_on_locked_coremask 00:07:08.938 ************************************ 00:07:09.199 19:03:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:09.199 19:03:15 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:09.199 19:03:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.199 19:03:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.199 19:03:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:09.199 ************************************ 00:07:09.199 START TEST locking_app_on_unlocked_coremask 00:07:09.199 ************************************ 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1225416 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1225416 /var/tmp/spdk.sock 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1225416 ']' 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:09.199 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:09.199 [2024-07-12 19:03:15.195111] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:09.199 [2024-07-12 19:03:15.195165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225416 ] 00:07:09.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.199 [2024-07-12 19:03:15.254066] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:09.199 [2024-07-12 19:03:15.254100] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.199 [2024-07-12 19:03:15.317984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1225460 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1225460 /var/tmp/spdk2.sock 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1225460 ']' 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:10.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.142 19:03:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.142 [2024-07-12 19:03:16.020918] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:10.142 [2024-07-12 19:03:16.020971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225460 ] 00:07:10.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.142 [2024-07-12 19:03:16.110475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.142 [2024-07-12 19:03:16.243878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.713 19:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.713 19:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:10.713 19:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1225460 00:07:10.713 19:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1225460 00:07:10.713 19:03:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:10.973 lslocks: write error 00:07:10.973 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1225416 00:07:10.973 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1225416 ']' 00:07:10.973 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1225416 00:07:10.973 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:10.973 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:10.973 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225416 00:07:11.233 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.233 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.233 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225416' 00:07:11.233 killing process with pid 1225416 00:07:11.233 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1225416 00:07:11.233 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1225416 00:07:11.494 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1225460 00:07:11.494 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1225460 ']' 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1225460 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225460 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225460' 00:07:11.495 killing process with pid 1225460 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1225460 00:07:11.495 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1225460 00:07:11.755 00:07:11.755 real 0m2.695s 00:07:11.755 user 0m2.985s 00:07:11.755 sys 0m0.764s 00:07:11.755 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.755 19:03:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:11.755 ************************************ 00:07:11.756 END TEST locking_app_on_unlocked_coremask 00:07:11.756 ************************************ 00:07:11.756 19:03:17 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:11.756 19:03:17 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:11.756 19:03:17 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.756 19:03:17 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.756 19:03:17 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:12.017 ************************************ 00:07:12.017 START TEST locking_app_on_locked_coremask 00:07:12.017 ************************************ 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1225993 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1225993 /var/tmp/spdk.sock 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1225993 ']' 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.017 19:03:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.017 [2024-07-12 19:03:17.962416] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:12.017 [2024-07-12 19:03:17.962463] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225993 ] 00:07:12.017 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.017 [2024-07-12 19:03:18.020749] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.017 [2024-07-12 19:03:18.085054] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1226129 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1226129 /var/tmp/spdk2.sock 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1226129 /var/tmp/spdk2.sock 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1226129 /var/tmp/spdk2.sock 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1226129 ']' 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:12.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.959 19:03:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:12.959 [2024-07-12 19:03:18.782094] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:12.959 [2024-07-12 19:03:18.782155] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226129 ] 00:07:12.959 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.959 [2024-07-12 19:03:18.869963] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1225993 has claimed it. 00:07:12.959 [2024-07-12 19:03:18.870005] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:13.530 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1226129) - No such process 00:07:13.530 ERROR: process (pid: 1226129) is no longer running 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1225993 00:07:13.530 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1225993 00:07:13.531 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.791 lslocks: write error 00:07:13.791 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1225993 00:07:13.791 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1225993 ']' 00:07:13.791 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1225993 00:07:13.791 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:07:13.791 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:13.792 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1225993 00:07:13.792 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:13.792 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:13.792 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1225993' 00:07:13.792 killing process with pid 1225993 00:07:13.792 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1225993 00:07:13.792 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1225993 00:07:14.052 00:07:14.052 real 0m2.045s 00:07:14.052 user 0m2.283s 00:07:14.052 sys 0m0.542s 00:07:14.053 19:03:19 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.053 19:03:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.053 ************************************ 00:07:14.053 END TEST locking_app_on_locked_coremask 00:07:14.053 ************************************ 00:07:14.053 19:03:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:14.053 19:03:19 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:14.053 19:03:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:14.053 19:03:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.053 19:03:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:14.053 ************************************ 00:07:14.053 START TEST locking_overlapped_coremask 00:07:14.053 ************************************ 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1226507 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1226507 /var/tmp/spdk.sock 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1226507 ']' 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.053 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.053 [2024-07-12 19:03:20.078424] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:14.053 [2024-07-12 19:03:20.078472] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226507 ] 00:07:14.053 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.053 [2024-07-12 19:03:20.138217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.313 [2024-07-12 19:03:20.205157] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.313 [2024-07-12 19:03:20.205232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.313 [2024-07-12 19:03:20.205235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.884 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:14.884 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:07:14.884 19:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1226534 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1226534 /var/tmp/spdk2.sock 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1226534 /var/tmp/spdk2.sock 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1226534 /var/tmp/spdk2.sock 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1226534 ']' 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:14.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.885 19:03:20 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:14.885 [2024-07-12 19:03:20.895615] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:14.885 [2024-07-12 19:03:20.895669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226534 ] 00:07:14.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.885 [2024-07-12 19:03:20.965523] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1226507 has claimed it. 00:07:14.885 [2024-07-12 19:03:20.965558] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:15.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1226534) - No such process 00:07:15.456 ERROR: process (pid: 1226534) is no longer running 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1226507 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1226507 ']' 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1226507 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:07:15.456 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:15.457 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226507 00:07:15.457 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:15.457 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:15.457 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226507' 00:07:15.457 killing process with pid 1226507 00:07:15.457 19:03:21 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1226507 00:07:15.457 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1226507 00:07:15.717 00:07:15.717 real 0m1.753s 00:07:15.717 user 0m4.981s 00:07:15.717 sys 0m0.353s 00:07:15.717 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.717 19:03:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:15.717 ************************************ 00:07:15.717 END TEST locking_overlapped_coremask 00:07:15.717 ************************************ 00:07:15.717 19:03:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:15.717 19:03:21 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:15.717 19:03:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:15.717 19:03:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.717 19:03:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:15.978 ************************************ 00:07:15.978 START TEST locking_overlapped_coremask_via_rpc 00:07:15.978 ************************************ 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1226887 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1226887 /var/tmp/spdk.sock 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1226887 ']' 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:15.978 19:03:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:15.978 [2024-07-12 19:03:21.904346] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:15.978 [2024-07-12 19:03:21.904399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226887 ] 00:07:15.978 EAL: No free 2048 kB hugepages reported on node 1 00:07:15.978 [2024-07-12 19:03:21.963105] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:15.978 [2024-07-12 19:03:21.963137] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:15.978 [2024-07-12 19:03:22.031669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:15.978 [2024-07-12 19:03:22.031783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:15.978 [2024-07-12 19:03:22.031786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1226946 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1226946 /var/tmp/spdk2.sock 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1226946 ']' 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:16.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.551 19:03:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.812 [2024-07-12 19:03:22.716445] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:16.812 [2024-07-12 19:03:22.716501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1226946 ] 00:07:16.812 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.812 [2024-07-12 19:03:22.792404] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:16.812 [2024-07-12 19:03:22.792428] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:16.812 [2024-07-12 19:03:22.898194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:16.812 [2024-07-12 19:03:22.901244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:16.812 [2024-07-12 19:03:22.901246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.384 [2024-07-12 19:03:23.493185] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1226887 has claimed it. 
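The error above is easier to read once the two core masks are decoded: -m 0x7 covers cores 0, 1 and 2, while -m 0x1c covers cores 2, 3 and 4, so core 2 is the one both targets want and the one claim_cpu_cores complains about. A small helper for decoding a hex mask into a core list (plain bash, mask values taken from the command lines above):

    decode_mask() {
        local mask=$(( $1 )) core=0
        local cores=()
        while (( mask > 0 )); do
            if (( mask & 1 )); then cores+=("$core"); fi
            mask=$(( mask >> 1 ))
            core=$(( core + 1 ))
        done
        echo "${cores[@]}"
    }
    decode_mask 0x7     # -> 0 1 2
    decode_mask 0x1c    # -> 2 3 4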
00:07:17.384 request: 00:07:17.384 { 00:07:17.384 "method": "framework_enable_cpumask_locks", 00:07:17.384 "req_id": 1 00:07:17.384 } 00:07:17.384 Got JSON-RPC error response 00:07:17.384 response: 00:07:17.384 { 00:07:17.384 "code": -32603, 00:07:17.384 "message": "Failed to claim CPU core: 2" 00:07:17.384 } 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1226887 /var/tmp/spdk.sock 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1226887 ']' 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:17.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.384 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1226946 /var/tmp/spdk2.sock 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1226946 ']' 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
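The same exchange can be reproduced by hand against the two sockets; the method name and the -32603 "Failed to claim CPU core: 2" payload match the JSON printed above, and only the scripts/rpc.py path is assumed here:

    # first target: the claim succeeds and the lock files for mask 0x7 appear
    ./scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_*          # spdk_cpu_lock_000 ... spdk_cpu_lock_002
    # second target: core 2 is already locked, so the RPC fails with -32603
    ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks \
        || echo "claim failed as expected"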
00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.645 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:17.906 00:07:17.906 real 0m1.993s 00:07:17.906 user 0m0.767s 00:07:17.906 sys 0m0.151s 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:17.906 19:03:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:17.906 ************************************ 00:07:17.906 END TEST locking_overlapped_coremask_via_rpc 00:07:17.906 ************************************ 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:07:17.906 19:03:23 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:17.906 19:03:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1226887 ]] 00:07:17.906 19:03:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1226887 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1226887 ']' 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1226887 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226887 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226887' 00:07:17.906 killing process with pid 1226887 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1226887 00:07:17.906 19:03:23 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1226887 00:07:18.167 19:03:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1226946 ]] 00:07:18.167 19:03:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1226946 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1226946 ']' 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1226946 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1226946 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1226946' 00:07:18.167 killing process with pid 1226946 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1226946 00:07:18.167 19:03:24 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1226946 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1226887 ]] 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1226887 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1226887 ']' 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1226887 00:07:18.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1226887) - No such process 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1226887 is not found' 00:07:18.429 Process with pid 1226887 is not found 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1226946 ]] 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1226946 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1226946 ']' 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1226946 00:07:18.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1226946) - No such process 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1226946 is not found' 00:07:18.429 Process with pid 1226946 is not found 00:07:18.429 19:03:24 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.429 00:07:18.429 real 0m15.350s 00:07:18.429 user 0m26.709s 00:07:18.429 sys 0m4.471s 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.429 19:03:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.429 ************************************ 00:07:18.429 END TEST cpu_locks 00:07:18.429 ************************************ 00:07:18.429 19:03:24 event -- common/autotest_common.sh@1142 -- # return 0 00:07:18.429 00:07:18.429 real 0m40.866s 00:07:18.429 user 1m19.753s 00:07:18.429 sys 0m7.491s 00:07:18.429 19:03:24 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:18.429 19:03:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.429 ************************************ 00:07:18.429 END TEST event 00:07:18.429 ************************************ 00:07:18.429 19:03:24 -- common/autotest_common.sh@1142 -- # return 0 00:07:18.429 19:03:24 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.429 19:03:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:18.429 19:03:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.429 
19:03:24 -- common/autotest_common.sh@10 -- # set +x 00:07:18.429 ************************************ 00:07:18.429 START TEST thread 00:07:18.429 ************************************ 00:07:18.429 19:03:24 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:18.690 * Looking for test storage... 00:07:18.690 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:18.690 19:03:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.690 19:03:24 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:18.690 19:03:24 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.690 19:03:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.690 ************************************ 00:07:18.690 START TEST thread_poller_perf 00:07:18.690 ************************************ 00:07:18.690 19:03:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.690 [2024-07-12 19:03:24.667137] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:18.690 [2024-07-12 19:03:24.667236] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227527 ] 00:07:18.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:18.690 [2024-07-12 19:03:24.737556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.690 [2024-07-12 19:03:24.812266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.690 Running 1000 pollers for 1 seconds with 1 microseconds period. 
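The poller_perf run logged here is just the example binary with three flags, and the banner spells out how they map: -b is the number of pollers registered, -l the poller period in microseconds, -t the run time in seconds. Reproducing the first run outside the harness, using the path from the command line above:

    # 1000 pollers, 1 us period, 1 second run
    ./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1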
00:07:20.083 ====================================== 00:07:20.084 busy:2410062198 (cyc) 00:07:20.084 total_run_count: 287000 00:07:20.084 tsc_hz: 2400000000 (cyc) 00:07:20.084 ====================================== 00:07:20.084 poller_cost: 8397 (cyc), 3498 (nsec) 00:07:20.084 00:07:20.084 real 0m1.228s 00:07:20.084 user 0m1.144s 00:07:20.084 sys 0m0.080s 00:07:20.084 19:03:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:20.084 19:03:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.084 ************************************ 00:07:20.084 END TEST thread_poller_perf 00:07:20.084 ************************************ 00:07:20.084 19:03:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:20.084 19:03:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.084 19:03:25 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:20.084 19:03:25 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.084 19:03:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.084 ************************************ 00:07:20.084 START TEST thread_poller_perf 00:07:20.084 ************************************ 00:07:20.084 19:03:25 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.084 [2024-07-12 19:03:25.973302] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:20.084 [2024-07-12 19:03:25.973403] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1227710 ] 00:07:20.084 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.084 [2024-07-12 19:03:26.035518] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.084 [2024-07-12 19:03:26.100525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.084 Running 1000 pollers for 1 seconds with 0 microseconds period. 
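The summary block is straightforward arithmetic over the counters it prints: per-poll cost in cycles is the busy cycle count divided by total_run_count, and the nanosecond figure converts that through tsc_hz. Rechecking the first run's numbers with integer math (a sketch, values copied from the block above):

    busy=2410062198 runs=287000 tsc_hz=2400000000
    cyc=$(( busy / runs ))                    # 8397 cycles per poll
    nsec=$(( cyc * 1000000000 / tsc_hz ))     # 3498 ns at 2.4 GHz
    echo "poller_cost: $cyc (cyc), $nsec (nsec)"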
00:07:21.028 ====================================== 00:07:21.029 busy:2402106312 (cyc) 00:07:21.029 total_run_count: 3801000 00:07:21.029 tsc_hz: 2400000000 (cyc) 00:07:21.029 ====================================== 00:07:21.029 poller_cost: 631 (cyc), 262 (nsec) 00:07:21.029 00:07:21.029 real 0m1.201s 00:07:21.029 user 0m1.136s 00:07:21.029 sys 0m0.061s 00:07:21.029 19:03:27 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.029 19:03:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.029 ************************************ 00:07:21.029 END TEST thread_poller_perf 00:07:21.029 ************************************ 00:07:21.290 19:03:27 thread -- common/autotest_common.sh@1142 -- # return 0 00:07:21.290 19:03:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:21.290 00:07:21.290 real 0m2.682s 00:07:21.290 user 0m2.371s 00:07:21.290 sys 0m0.318s 00:07:21.290 19:03:27 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:21.290 19:03:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.290 ************************************ 00:07:21.290 END TEST thread 00:07:21.290 ************************************ 00:07:21.290 19:03:27 -- common/autotest_common.sh@1142 -- # return 0 00:07:21.290 19:03:27 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:21.290 19:03:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:21.290 19:03:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:21.290 19:03:27 -- common/autotest_common.sh@10 -- # set +x 00:07:21.290 ************************************ 00:07:21.290 START TEST accel 00:07:21.290 ************************************ 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:07:21.290 * Looking for test storage... 00:07:21.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:21.290 19:03:27 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:07:21.290 19:03:27 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:07:21.290 19:03:27 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:21.290 19:03:27 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1228084 00:07:21.290 19:03:27 accel -- accel/accel.sh@63 -- # waitforlisten 1228084 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@829 -- # '[' -z 1228084 ']' 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
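get_expected_opcs, invoked at the top of accel.sh here, boils down to one RPC reshaped by jq, as the trace goes on to show: it asks the target which module handles each opcode and records the answer (all software in this run). Issued by hand it would look like this, with only the scripts/rpc.py path assumed:

    # list opcode -> module assignments as "opc=module" pairs
    ./scripts/rpc.py accel_get_opc_assignments \
        | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'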
00:07:21.290 19:03:27 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.290 19:03:27 accel -- accel/accel.sh@61 -- # build_accel_config 00:07:21.290 19:03:27 accel -- common/autotest_common.sh@10 -- # set +x 00:07:21.290 19:03:27 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:21.290 19:03:27 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:21.290 19:03:27 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.290 19:03:27 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.290 19:03:27 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:21.290 19:03:27 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:21.290 19:03:27 accel -- accel/accel.sh@41 -- # jq -r . 00:07:21.552 [2024-07-12 19:03:27.431465] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:21.552 [2024-07-12 19:03:27.431533] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228084 ] 00:07:21.552 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.552 [2024-07-12 19:03:27.496818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.552 [2024-07-12 19:03:27.571586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.124 19:03:28 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:22.124 19:03:28 accel -- common/autotest_common.sh@862 -- # return 0 00:07:22.124 19:03:28 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:07:22.124 19:03:28 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:07:22.124 19:03:28 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:07:22.124 19:03:28 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:07:22.124 19:03:28 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:22.124 19:03:28 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:07:22.124 19:03:28 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:22.124 19:03:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.124 19:03:28 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:22.124 19:03:28 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:22.124 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.124 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.124 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.124 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.124 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.124 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.124 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # IFS== 00:07:22.385 19:03:28 accel -- accel/accel.sh@72 -- # read -r opc module 00:07:22.385 19:03:28 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:07:22.385 19:03:28 accel -- accel/accel.sh@75 -- # killprocess 1228084 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@948 -- # '[' -z 1228084 ']' 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@952 -- # kill -0 1228084 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@953 -- # uname 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1228084 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1228084' 00:07:22.385 killing process with pid 1228084 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@967 -- # kill 1228084 00:07:22.385 19:03:28 accel -- common/autotest_common.sh@972 -- # wait 1228084 00:07:22.647 19:03:28 accel -- accel/accel.sh@76 -- # trap - ERR 00:07:22.647 19:03:28 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.647 19:03:28 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:07:22.647 19:03:28 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:07:22.647 19:03:28 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.647 19:03:28 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.647 19:03:28 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.647 19:03:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.647 ************************************ 00:07:22.647 START TEST accel_missing_filename 00:07:22.647 ************************************ 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.647 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:07:22.647 19:03:28 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:07:22.647 [2024-07-12 19:03:28.692724] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:22.647 [2024-07-12 19:03:28.692806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228450 ] 00:07:22.647 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.647 [2024-07-12 19:03:28.757046] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.908 [2024-07-12 19:03:28.829937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.908 [2024-07-12 19:03:28.862231] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:22.908 [2024-07-12 19:03:28.899175] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:22.908 A filename is required. 
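accel_missing_filename exercises exactly the failure printed above: a compress workload with no -l input makes accel_perf bail out with "A filename is required." before starting. The passing shape of the command, pointing -l at the bundled test/accel/bib file the next test uses, would be roughly the following; whether the compress path then runs depends on how the software accel module was built, so treat it as a sketch:

    # rejected: compress needs an input file
    ./build/examples/accel_perf -t 1 -w compress || echo "rejected as expected"
    # accepted form: same workload with an input file
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib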
00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:22.908 00:07:22.908 real 0m0.290s 00:07:22.908 user 0m0.223s 00:07:22.908 sys 0m0.106s 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.908 19:03:28 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:07:22.908 ************************************ 00:07:22.908 END TEST accel_missing_filename 00:07:22.908 ************************************ 00:07:22.908 19:03:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.908 19:03:28 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.908 19:03:28 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:22.908 19:03:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.908 19:03:28 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.908 ************************************ 00:07:22.908 START TEST accel_compress_verify 00:07:22.908 ************************************ 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:22.908 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.908 19:03:29 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:22.908 19:03:29 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:07:23.169 [2024-07-12 19:03:29.058527] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:23.170 [2024-07-12 19:03:29.058603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228479 ] 00:07:23.170 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.170 [2024-07-12 19:03:29.121754] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.170 [2024-07-12 19:03:29.189539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.170 [2024-07-12 19:03:29.221366] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:23.170 [2024-07-12 19:03:29.258129] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:07:23.432 00:07:23.432 Compression does not support the verify option, aborting. 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.432 00:07:23.432 real 0m0.284s 00:07:23.432 user 0m0.218s 00:07:23.432 sys 0m0.106s 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.432 19:03:29 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 ************************************ 00:07:23.432 END TEST accel_compress_verify 00:07:23.432 ************************************ 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.432 19:03:29 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 ************************************ 00:07:23.432 START TEST accel_wrong_workload 00:07:23.432 ************************************ 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:23.432 19:03:29 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:07:23.432 19:03:29 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:07:23.432 Unsupported workload type: foobar 00:07:23.432 [2024-07-12 19:03:29.416830] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:23.432 accel_perf options: 00:07:23.432 [-h help message] 00:07:23.432 [-q queue depth per core] 00:07:23.432 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:23.432 [-T number of threads per core 00:07:23.432 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:23.432 [-t time in seconds] 00:07:23.432 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:23.432 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:23.432 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:23.432 [-l for compress/decompress workloads, name of uncompressed input file 00:07:23.432 [-S for crc32c workload, use this seed value (default 0) 00:07:23.432 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:23.432 [-f for fill workload, use this BYTE value (default 255) 00:07:23.432 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:23.432 [-y verify result if this switch is on] 00:07:23.432 [-a tasks to allocate per core (default: same value as -q)] 00:07:23.432 Can be used to spread operations across a wider range of memory. 
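The usage text dumped above doubles as a reference for the flags the rest of the suite relies on: -w selects the workload, -t the duration in seconds, -S the crc32c seed, -y turns on result verification. A well-formed invocation matching the crc32c run that follows:

    # 1 second crc32c run, seed 32, verify results
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y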
00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.432 00:07:23.432 real 0m0.036s 00:07:23.432 user 0m0.021s 00:07:23.432 sys 0m0.015s 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.432 19:03:29 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 ************************************ 00:07:23.432 END TEST accel_wrong_workload 00:07:23.432 ************************************ 00:07:23.432 Error: writing output failed: Broken pipe 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.432 19:03:29 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.432 19:03:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.432 ************************************ 00:07:23.432 START TEST accel_negative_buffers 00:07:23.432 ************************************ 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:23.432 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:07:23.432 19:03:29 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:07:23.432 -x option must be non-negative. 
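accel_negative_buffers is the same idea applied to -x: per the usage text the xor workload needs at least two source buffers, so -x -1 is rejected during argument parsing. A valid xor run under that constraint would be something like:

    # xor across 3 source buffers, verifying the result
    ./build/examples/accel_perf -t 1 -w xor -y -x 3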
00:07:23.432 [2024-07-12 19:03:29.530582] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:23.432 accel_perf options: 00:07:23.432 [-h help message] 00:07:23.432 [-q queue depth per core] 00:07:23.432 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:23.432 [-T number of threads per core 00:07:23.432 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:23.432 [-t time in seconds] 00:07:23.432 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:23.433 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:07:23.433 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:23.433 [-l for compress/decompress workloads, name of uncompressed input file 00:07:23.433 [-S for crc32c workload, use this seed value (default 0) 00:07:23.433 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:23.433 [-f for fill workload, use this BYTE value (default 255) 00:07:23.433 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:23.433 [-y verify result if this switch is on] 00:07:23.433 [-a tasks to allocate per core (default: same value as -q)] 00:07:23.433 Can be used to spread operations across a wider range of memory. 00:07:23.433 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:07:23.433 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:23.433 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:23.433 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:23.433 00:07:23.433 real 0m0.035s 00:07:23.433 user 0m0.021s 00:07:23.433 sys 0m0.014s 00:07:23.433 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.433 19:03:29 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:07:23.433 ************************************ 00:07:23.433 END TEST accel_negative_buffers 00:07:23.433 ************************************ 00:07:23.433 Error: writing output failed: Broken pipe 00:07:23.694 19:03:29 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:23.694 19:03:29 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:23.694 19:03:29 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:23.694 19:03:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.694 19:03:29 accel -- common/autotest_common.sh@10 -- # set +x 00:07:23.694 ************************************ 00:07:23.694 START TEST accel_crc32c 00:07:23.694 ************************************ 00:07:23.694 19:03:29 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:23.694 [2024-07-12 19:03:29.640410] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:23.694 [2024-07-12 19:03:29.640484] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228676 ] 00:07:23.694 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.694 [2024-07-12 19:03:29.701526] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.694 [2024-07-12 19:03:29.768580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:23.694 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:23.695 19:03:29 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:25.111 19:03:30 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.111 00:07:25.111 real 0m1.285s 00:07:25.111 user 0m1.195s 00:07:25.111 sys 0m0.101s 00:07:25.111 19:03:30 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.111 19:03:30 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:25.111 ************************************ 00:07:25.111 END TEST accel_crc32c 00:07:25.111 ************************************ 00:07:25.111 19:03:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.111 19:03:30 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:25.111 19:03:30 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:25.111 19:03:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.111 19:03:30 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.111 ************************************ 00:07:25.111 START TEST accel_crc32c_C2 00:07:25.111 ************************************ 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:25.111 19:03:30 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:25.111 19:03:30 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:25.111 [2024-07-12 19:03:31.002572] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:25.111 [2024-07-12 19:03:31.002669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1228891 ] 00:07:25.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.111 [2024-07-12 19:03:31.066024] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.111 [2024-07-12 19:03:31.139340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.111 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:07:25.112 19:03:31 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.495 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.496 00:07:26.496 real 0m1.295s 00:07:26.496 user 0m1.195s 00:07:26.496 sys 0m0.111s 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.496 19:03:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:26.496 ************************************ 00:07:26.496 END TEST accel_crc32c_C2 00:07:26.496 ************************************ 00:07:26.496 19:03:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.496 19:03:32 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:26.496 19:03:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:26.496 19:03:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.496 19:03:32 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.496 ************************************ 00:07:26.496 START TEST accel_copy 00:07:26.496 ************************************ 00:07:26.496 19:03:32 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:07:26.496 [2024-07-12 19:03:32.373610] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:26.496 [2024-07-12 19:03:32.373691] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229246 ] 00:07:26.496 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.496 [2024-07-12 19:03:32.435032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.496 [2024-07-12 19:03:32.502232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:26.496 19:03:32 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 
19:03:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:27.880 19:03:33 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:27.880 00:07:27.880 real 0m1.285s 00:07:27.880 user 0m1.195s 00:07:27.880 sys 0m0.100s 00:07:27.880 19:03:33 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.880 19:03:33 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:07:27.880 ************************************ 00:07:27.880 END TEST accel_copy 00:07:27.880 ************************************ 00:07:27.880 19:03:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:27.881 19:03:33 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.881 19:03:33 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:27.881 19:03:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.881 19:03:33 accel -- common/autotest_common.sh@10 -- # set +x 00:07:27.881 ************************************ 00:07:27.881 START TEST accel_fill 00:07:27.881 ************************************ 00:07:27.881 19:03:33 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:07:27.881 [2024-07-12 19:03:33.735978] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:27.881 [2024-07-12 19:03:33.736058] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229596 ] 00:07:27.881 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.881 [2024-07-12 19:03:33.796621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.881 [2024-07-12 19:03:33.860837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:27.881 19:03:33 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:34 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:29.265 19:03:34 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.265 00:07:29.265 real 0m1.282s 00:07:29.265 user 0m1.192s 00:07:29.265 sys 0m0.101s 00:07:29.265 19:03:34 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:29.265 19:03:34 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:07:29.265 ************************************ 00:07:29.265 END TEST accel_fill 00:07:29.265 ************************************ 00:07:29.265 19:03:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:29.265 19:03:35 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:29.265 19:03:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:29.265 19:03:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.265 19:03:35 accel -- common/autotest_common.sh@10 -- # set +x 00:07:29.265 ************************************ 00:07:29.265 START TEST accel_copy_crc32c 00:07:29.265 ************************************ 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:07:29.265 [2024-07-12 19:03:35.096324] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:29.265 [2024-07-12 19:03:35.096417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229943 ] 00:07:29.265 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.265 [2024-07-12 19:03:35.157386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.265 [2024-07-12 19:03:35.223562] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:29.265 
19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:29.265 19:03:35 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:30.652 00:07:30.652 real 0m1.285s 00:07:30.652 user 0m1.198s 00:07:30.652 sys 0m0.100s 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.652 19:03:36 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:07:30.652 ************************************ 00:07:30.652 END TEST accel_copy_crc32c 00:07:30.652 ************************************ 00:07:30.652 19:03:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:30.652 19:03:36 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.652 19:03:36 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:30.652 19:03:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.652 19:03:36 accel -- common/autotest_common.sh@10 -- # set +x 00:07:30.652 ************************************ 00:07:30.652 START TEST accel_copy_crc32c_C2 00:07:30.652 ************************************ 00:07:30.652 19:03:36 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:07:30.652 [2024-07-12 19:03:36.458723] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:30.652 [2024-07-12 19:03:36.458786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230151 ] 00:07:30.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.652 [2024-07-12 19:03:36.521196] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.652 [2024-07-12 19:03:36.587515] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:30.652 19:03:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.594 00:07:31.594 real 0m1.288s 00:07:31.594 user 0m1.190s 00:07:31.594 sys 0m0.112s 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.594 19:03:37 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:07:31.594 ************************************ 00:07:31.594 END TEST accel_copy_crc32c_C2 00:07:31.594 ************************************ 00:07:31.855 19:03:37 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:31.855 19:03:37 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:31.855 19:03:37 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:31.855 19:03:37 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.855 19:03:37 accel -- common/autotest_common.sh@10 -- # set +x 00:07:31.855 ************************************ 00:07:31.855 START TEST accel_dualcast 00:07:31.855 ************************************ 00:07:31.855 19:03:37 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:07:31.855 19:03:37 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:07:31.855 [2024-07-12 19:03:37.824374] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:31.855 [2024-07-12 19:03:37.824439] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230355 ] 00:07:31.855 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.855 [2024-07-12 19:03:37.886353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.855 [2024-07-12 19:03:37.955589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:32.116 19:03:37 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:33.061 19:03:39 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:33.061 00:07:33.061 real 0m1.288s 00:07:33.061 user 0m1.202s 00:07:33.061 sys 0m0.098s 00:07:33.061 19:03:39 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:33.061 19:03:39 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:07:33.061 ************************************ 00:07:33.061 END TEST accel_dualcast 00:07:33.061 ************************************ 00:07:33.061 19:03:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:33.061 19:03:39 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:33.061 19:03:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:33.061 19:03:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:33.061 19:03:39 accel -- common/autotest_common.sh@10 -- # set +x 00:07:33.061 ************************************ 00:07:33.061 START TEST accel_compare 00:07:33.061 ************************************ 00:07:33.061 19:03:39 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:07:33.061 19:03:39 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:07:33.061 [2024-07-12 19:03:39.188534] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
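Nearly all of the repeated xtrace above and below comes from one small loop in accel.sh (lines 19-23 of this checkout, per the sh@NN markers): accel_perf's human-readable output is read line by line, split on ':' into var and val, the value is trimmed (the val=... assignments echoed at sh@20), and the workload type and module name are captured at sh@22/23 so the sh@27 checks at the end of each test can assert on them. A minimal sketch of that loop, reconstructed from the trace rather than copied from the script (the exact case patterns and the trim expression are assumptions consistent with the values seen):

  # reconstruction of the accel.sh parsing loop, not the literal source
  while IFS=: read -r var val; do
      val=${val# }                          # drop the leading space left after the ':'
      case "$var" in
          *Workload*) accel_opc=$val ;;     # e.g. "compare"
          *Module*)   accel_module=$val ;;  # e.g. "software"
      esac
  done < <(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compare -y)
  # mirrors the accel.sh@27 checks seen after each run
  [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]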
00:07:33.061 [2024-07-12 19:03:39.188622] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230682 ] 00:07:33.322 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.322 [2024-07-12 19:03:39.249460] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.322 [2024-07-12 19:03:39.313748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:33.322 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:33.323 19:03:39 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 
19:03:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:34.707 19:03:40 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.707 00:07:34.707 real 0m1.283s 00:07:34.707 user 0m1.191s 00:07:34.708 sys 0m0.102s 00:07:34.708 19:03:40 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.708 19:03:40 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:34.708 ************************************ 00:07:34.708 END TEST accel_compare 00:07:34.708 ************************************ 00:07:34.708 19:03:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:34.708 19:03:40 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:34.708 19:03:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:34.708 19:03:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.708 19:03:40 accel -- common/autotest_common.sh@10 -- # set +x 00:07:34.708 ************************************ 00:07:34.708 START TEST accel_xor 00:07:34.708 ************************************ 00:07:34.708 19:03:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:34.708 [2024-07-12 19:03:40.547766] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
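Each of these runs then parses the same configuration back out of the tool: what is presumably the 0x1 core mask, 4096-byte buffers, two settings of 32 (their labels sit in $var, which the trace does not echo), a 1-second run time, and Yes for verification; the real/user/sys summary then lands a little above the requested 1 second, presumably because it includes setup and teardown. The xor case started above additionally parses a source count of 2, which appears to be the tool's default since no -x option is passed. Reproducing it directly, path as above:

  # 1-second xor run over two source buffers (the apparent default), with verification
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y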
00:07:34.708 [2024-07-12 19:03:40.547845] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231037 ] 00:07:34.708 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.708 [2024-07-12 19:03:40.607934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.708 [2024-07-12 19:03:40.671358] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:34.708 19:03:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:36.092 19:03:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:36.092 00:07:36.092 real 0m1.280s 00:07:36.092 user 0m1.194s 00:07:36.092 sys 0m0.098s 00:07:36.092 19:03:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:36.092 19:03:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:36.092 ************************************ 00:07:36.093 END TEST accel_xor 00:07:36.093 ************************************ 00:07:36.093 19:03:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:36.093 19:03:41 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:36.093 19:03:41 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:36.093 19:03:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.093 19:03:41 accel -- common/autotest_common.sh@10 -- # set +x 00:07:36.093 ************************************ 00:07:36.093 START TEST accel_xor 00:07:36.093 ************************************ 00:07:36.093 19:03:41 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:36.093 19:03:41 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:36.093 [2024-07-12 19:03:41.904110] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
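The second accel_xor test begun above is the same workload with -x 3 added: where the previous run's trace showed val=2 right after the xor opcode, this one shows val=3, so three source buffers are XORed per operation (that reading of -x is inferred from the 2-versus-3 difference, not stated anywhere in the log). The equivalent manual invocation, path as above:

  # same 1-second verified xor run, but with three source buffers instead of the default two
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3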
00:07:36.093 [2024-07-12 19:03:41.904175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231384 ] 00:07:36.093 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.093 [2024-07-12 19:03:41.962900] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.093 [2024-07-12 19:03:42.025583] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:36.093 19:03:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:37.035 19:03:43 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.035 00:07:37.035 real 0m1.277s 00:07:37.035 user 0m1.194s 00:07:37.035 sys 0m0.095s 00:07:37.035 19:03:43 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:37.035 19:03:43 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:37.035 ************************************ 00:07:37.035 END TEST accel_xor 00:07:37.035 ************************************ 00:07:37.295 19:03:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:37.295 19:03:43 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:37.295 19:03:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:37.295 19:03:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.295 19:03:43 accel -- common/autotest_common.sh@10 -- # set +x 00:07:37.295 ************************************ 00:07:37.295 START TEST accel_dif_verify 00:07:37.295 ************************************ 00:07:37.295 19:03:43 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:37.295 19:03:43 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:37.295 [2024-07-12 19:03:43.258593] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
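The dif_verify case started above follows the same pattern but parses three buffer sizes instead of one: 4096 bytes twice, then 512 bytes and 8 bytes, which is consistent with DIF-style block-plus-metadata settings reported by the tool (the labels are again not echoed, so that reading is an inference), and the verify value comes back as No, which lines up with -y not being passed this time. The run itself adds no extra flags, so those sizes are whatever accel_perf defaults to:

  # 1-second dif_verify run; the 4096/512/8-byte settings in the trace are tool defaults, not extra flags
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify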
00:07:37.295 [2024-07-12 19:03:43.258666] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231621 ] 00:07:37.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.295 [2024-07-12 19:03:43.331633] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.295 [2024-07-12 19:03:43.403376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:37.556 19:03:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:38.497 19:03:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:38.498 19:03:44 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.498 00:07:38.498 real 0m1.303s 00:07:38.498 user 0m1.200s 00:07:38.498 sys 0m0.116s 00:07:38.498 19:03:44 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:38.498 19:03:44 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:38.498 ************************************ 00:07:38.498 END TEST accel_dif_verify 00:07:38.498 ************************************ 00:07:38.498 19:03:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:38.498 19:03:44 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:38.498 19:03:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:38.498 19:03:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.498 19:03:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:38.498 ************************************ 00:07:38.498 START TEST accel_dif_generate 00:07:38.498 ************************************ 00:07:38.498 19:03:44 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.498 
19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:38.498 19:03:44 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:38.758 [2024-07-12 19:03:44.637724] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:38.758 [2024-07-12 19:03:44.637799] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1231818 ] 00:07:38.758 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.758 [2024-07-12 19:03:44.699332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.758 [2024-07-12 19:03:44.766121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:38.758 19:03:44 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:38.758 19:03:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.155 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:40.156 19:03:45 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:40.156 19:03:45 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.156 00:07:40.156 real 0m1.285s 00:07:40.156 user 0m1.203s 00:07:40.156 sys 0m0.096s 00:07:40.156 19:03:45 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:40.156 19:03:45 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:40.156 ************************************ 00:07:40.156 END TEST accel_dif_generate 00:07:40.156 ************************************ 00:07:40.156 19:03:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:40.156 19:03:45 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:40.156 19:03:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:40.156 19:03:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.156 19:03:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:40.156 ************************************ 00:07:40.156 START TEST accel_dif_generate_copy 00:07:40.156 ************************************ 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:40.156 19:03:45 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:40.156 [2024-07-12 19:03:46.000292] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
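The trace above shows the invocation pattern that every accel test in this log follows: the accel_test wrapper calls the accel_perf example binary from the built SPDK tree, build_accel_config assembles an accel JSON config and hands it over on /dev/fd/62 via -c, -w selects the workload (here dif_generate, then dif_generate_copy), and -t 1 caps each run at one second on the software module (accel_module=software in the trace). As a minimal sketch, such a run could likely be reproduced by hand against the same built workspace; the standalone command below is an illustration assembled from flags visible in the trace, not a line taken from the log:

  # one-second software dif_generate run, reusing only the -t/-w flags traced above (illustrative sketch)
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate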
00:07:40.156 [2024-07-12 19:03:46.000361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232125 ] 00:07:40.156 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.156 [2024-07-12 19:03:46.060461] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.156 [2024-07-12 19:03:46.125363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:40.156 19:03:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:41.541 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:41.542 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:41.542 19:03:47 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:41.542 00:07:41.542 real 0m1.282s 00:07:41.542 user 0m1.193s 00:07:41.542 sys 0m0.101s 00:07:41.542 19:03:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:41.542 19:03:47 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:41.542 ************************************ 00:07:41.542 END TEST accel_dif_generate_copy 00:07:41.542 ************************************ 00:07:41.542 19:03:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:41.542 19:03:47 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:41.542 19:03:47 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.542 19:03:47 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:41.542 19:03:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.542 19:03:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:41.542 ************************************ 00:07:41.542 START TEST accel_comp 00:07:41.542 ************************************ 00:07:41.542 19:03:47 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.542 19:03:47 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:07:41.542 [2024-07-12 19:03:47.357543] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:41.542 [2024-07-12 19:03:47.357606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232472 ] 00:07:41.542 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.542 [2024-07-12 19:03:47.418531] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.542 [2024-07-12 19:03:47.482837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:41.542 19:03:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:07:42.484 19:03:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:42.745 19:03:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:07:42.745 19:03:48 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.745 00:07:42.745 real 0m1.285s 00:07:42.745 user 0m1.205s 00:07:42.745 sys 0m0.093s 00:07:42.745 19:03:48 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.745 19:03:48 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:07:42.745 ************************************ 00:07:42.745 END TEST accel_comp 00:07:42.745 ************************************ 00:07:42.745 19:03:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:42.745 19:03:48 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:42.745 19:03:48 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:42.745 19:03:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.745 19:03:48 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:42.745 ************************************ 00:07:42.745 START TEST accel_decomp 00:07:42.745 ************************************ 00:07:42.745 19:03:48 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:07:42.745 19:03:48 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:07:42.745 [2024-07-12 19:03:48.720531] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
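For the compression tests that start here, the accel_perf command gains two options that the DIF tests did not use: -l points at the test corpus (test/accel/bib in this tree) to be compressed or decompressed, and -y, passed only to the decompress variants, appears to request verification of the output (the traced option value flips from No to Yes). A sketch of the two invocations, assuming the same built tree and treating that reading of -y as an assumption rather than something the log states:

  # one-second compress run over the bib corpus, as in TEST accel_comp (illustrative sketch)
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
  # decompress the same corpus with result verification (-y), as in TEST accel_decomp
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y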
00:07:42.745 [2024-07-12 19:03:48.720620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232830 ] 00:07:42.745 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.745 [2024-07-12 19:03:48.781664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.745 [2024-07-12 19:03:48.847816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.006 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.007 19:03:48 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:43.948 19:03:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:43.948 00:07:43.948 real 0m1.288s 00:07:43.948 user 0m1.196s 00:07:43.948 sys 0m0.104s 00:07:43.948 19:03:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.948 19:03:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:43.948 ************************************ 00:07:43.948 END TEST accel_decomp 00:07:43.948 ************************************ 00:07:43.948 19:03:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:43.948 19:03:50 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.948 19:03:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:43.948 19:03:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.948 19:03:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:43.948 ************************************ 00:07:43.948 START TEST accel_decomp_full 00:07:43.948 ************************************ 00:07:43.948 19:03:50 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:43.948 19:03:50 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:43.948 19:03:50 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:44.210 [2024-07-12 19:03:50.087996] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:44.210 [2024-07-12 19:03:50.088086] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233120 ] 00:07:44.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.210 [2024-07-12 19:03:50.151684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.210 [2024-07-12 19:03:50.220021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:44.210 19:03:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:45.596 19:03:51 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.596 00:07:45.596 real 0m1.307s 00:07:45.596 user 0m1.214s 00:07:45.596 sys 0m0.106s 00:07:45.596 19:03:51 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:45.596 19:03:51 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 ************************************ 00:07:45.596 END TEST accel_decomp_full 00:07:45.596 ************************************ 00:07:45.596 19:03:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:45.596 19:03:51 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.596 19:03:51 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
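The accel_decomp_full test that just finished differs from plain accel_decomp only by the extra -o 0 argument to accel_perf; judging from the traced buffer size changing from '4096 bytes' to '111250 bytes', this makes the run operate on the whole bib corpus per operation instead of the default 4 KiB blocks (that interpretation is an inference from the trace, not stated in the log). A one-line sketch under that assumption:

  # full-buffer decompress-and-verify run, mirroring TEST accel_decomp_full above (illustrative sketch)
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0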
00:07:45.596 19:03:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.596 19:03:51 accel -- common/autotest_common.sh@10 -- # set +x 00:07:45.596 ************************************ 00:07:45.596 START TEST accel_decomp_mcore 00:07:45.596 ************************************ 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:45.596 [2024-07-12 19:03:51.468453] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
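The accel_decomp_mcore run being started here adds only a core mask, -m 0xf, to the decompress-and-verify command; accordingly the DPDK EAL line that follows is launched with -c 0xf, four reactor threads come up instead of one, and the per-test summary further down reports roughly four cores' worth of user time (0m4.438s) against a ~1.3 s wall-clock run. A sketch of the equivalent standalone invocation, assuming the same built tree:

  # four-core (mask 0xf) decompress-and-verify run, as in TEST accel_decomp_mcore (illustrative sketch)
  $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf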
00:07:45.596 [2024-07-12 19:03:51.468516] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233312 ] 00:07:45.596 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.596 [2024-07-12 19:03:51.529616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.596 [2024-07-12 19:03:51.597231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.596 [2024-07-12 19:03:51.597374] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.596 [2024-07-12 19:03:51.597530] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.596 [2024-07-12 19:03:51.597531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:45.596 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:45.597 19:03:51 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:46.982 00:07:46.982 real 0m1.296s 00:07:46.982 user 0m4.438s 00:07:46.982 sys 0m0.105s 00:07:46.982 19:03:52 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.982 19:03:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 ************************************ 00:07:46.982 END TEST accel_decomp_mcore 00:07:46.982 ************************************ 00:07:46.982 19:03:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:46.982 19:03:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.982 19:03:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:46.982 19:03:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:46.982 19:03:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:46.982 ************************************ 00:07:46.982 START TEST accel_decomp_full_mcore 00:07:46.982 ************************************ 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:46.982 19:03:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:46.982 [2024-07-12 19:03:52.840447] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:46.982 [2024-07-12 19:03:52.840543] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233569 ] 00:07:46.982 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.982 [2024-07-12 19:03:52.902811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:46.982 [2024-07-12 19:03:52.973858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:46.982 [2024-07-12 19:03:52.974043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:46.982 [2024-07-12 19:03:52.974198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:46.982 [2024-07-12 19:03:52.974354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:46.982 19:03:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.365 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.366 00:07:48.366 real 0m1.319s 00:07:48.366 user 0m4.501s 00:07:48.366 sys 0m0.113s 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:48.366 19:03:54 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:48.366 ************************************ 00:07:48.366 END TEST accel_decomp_full_mcore 00:07:48.366 ************************************ 00:07:48.366 19:03:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:48.366 19:03:54 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.366 19:03:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:48.366 19:03:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.366 19:03:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:48.366 ************************************ 00:07:48.366 START TEST accel_decomp_mthread 00:07:48.366 ************************************ 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:48.366 [2024-07-12 19:03:54.235618] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:48.366 [2024-07-12 19:03:54.235710] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233922 ] 00:07:48.366 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.366 [2024-07-12 19:03:54.296790] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.366 [2024-07-12 19:03:54.362669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:48.366 19:03:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.752 19:03:55 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:49.752 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:49.753 00:07:49.753 real 0m1.293s 00:07:49.753 user 0m1.211s 00:07:49.753 sys 0m0.095s 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.753 19:03:55 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:49.753 ************************************ 00:07:49.753 END TEST accel_decomp_mthread 00:07:49.753 ************************************ 00:07:49.753 19:03:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:49.753 19:03:55 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.753 19:03:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:49.753 19:03:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.753 19:03:55 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:49.753 ************************************ 00:07:49.753 START TEST accel_decomp_full_mthread 00:07:49.753 ************************************ 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:49.753 [2024-07-12 19:03:55.609495] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:49.753 [2024-07-12 19:03:55.609592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234277 ] 00:07:49.753 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.753 [2024-07-12 19:03:55.674367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.753 [2024-07-12 19:03:55.740365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:49.753 19:03:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:49.753 19:03:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.166 00:07:51.166 real 0m1.324s 00:07:51.166 user 0m1.237s 00:07:51.166 sys 0m0.099s 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.166 19:03:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:51.166 ************************************ 00:07:51.166 END 
TEST accel_decomp_full_mthread 00:07:51.166 ************************************ 00:07:51.166 19:03:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.166 19:03:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:51.166 19:03:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.166 19:03:56 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:51.166 19:03:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:51.166 19:03:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.166 19:03:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 19:03:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:51.167 19:03:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:51.167 19:03:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.167 19:03:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.167 19:03:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:51.167 19:03:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:51.167 19:03:56 accel -- accel/accel.sh@41 -- # jq -r . 00:07:51.167 ************************************ 00:07:51.167 START TEST accel_dif_functional_tests 00:07:51.167 ************************************ 00:07:51.167 19:03:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:51.167 [2024-07-12 19:03:57.027661] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:51.167 [2024-07-12 19:03:57.027714] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234627 ] 00:07:51.167 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.167 [2024-07-12 19:03:57.090268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:51.167 [2024-07-12 19:03:57.160554] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.167 [2024-07-12 19:03:57.160688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.167 [2024-07-12 19:03:57.160690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.167 00:07:51.167 00:07:51.167 CUnit - A unit testing framework for C - Version 2.1-3 00:07:51.167 http://cunit.sourceforge.net/ 00:07:51.167 00:07:51.167 00:07:51.167 Suite: accel_dif 00:07:51.167 Test: verify: DIF generated, GUARD check ...passed 00:07:51.167 Test: verify: DIF generated, APPTAG check ...passed 00:07:51.167 Test: verify: DIF generated, REFTAG check ...passed 00:07:51.167 Test: verify: DIF not generated, GUARD check ...[2024-07-12 19:03:57.216389] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:51.167 passed 00:07:51.167 Test: verify: DIF not generated, APPTAG check ...[2024-07-12 19:03:57.216432] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:51.167 passed 00:07:51.167 Test: verify: DIF not generated, REFTAG check ...[2024-07-12 19:03:57.216453] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:51.167 passed 00:07:51.167 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:51.167 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-12 
19:03:57.216500] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:51.167 passed 00:07:51.167 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:51.167 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:51.167 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:51.167 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-12 19:03:57.216611] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:51.167 passed 00:07:51.167 Test: verify copy: DIF generated, GUARD check ...passed 00:07:51.167 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:51.167 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:51.167 Test: verify copy: DIF not generated, GUARD check ...[2024-07-12 19:03:57.216734] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:51.167 passed 00:07:51.167 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-12 19:03:57.216757] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:51.167 passed 00:07:51.167 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-12 19:03:57.216779] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:51.167 passed 00:07:51.167 Test: generate copy: DIF generated, GUARD check ...passed 00:07:51.167 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:51.167 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:51.167 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:51.167 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:51.167 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:51.167 Test: generate copy: iovecs-len validate ...[2024-07-12 19:03:57.216959] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:51.167 passed 00:07:51.167 Test: generate copy: buffer alignment validate ...passed 00:07:51.167 00:07:51.167 Run Summary: Type Total Ran Passed Failed Inactive 00:07:51.167 suites 1 1 n/a 0 0 00:07:51.167 tests 26 26 26 0 0 00:07:51.167 asserts 115 115 115 0 n/a 00:07:51.167 00:07:51.167 Elapsed time = 0.000 seconds 00:07:51.428 00:07:51.428 real 0m0.354s 00:07:51.428 user 0m0.487s 00:07:51.428 sys 0m0.129s 00:07:51.428 19:03:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.429 19:03:57 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:51.429 ************************************ 00:07:51.429 END TEST accel_dif_functional_tests 00:07:51.429 ************************************ 00:07:51.429 19:03:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:51.429 00:07:51.429 real 0m30.100s 00:07:51.429 user 0m33.734s 00:07:51.429 sys 0m4.123s 00:07:51.429 19:03:57 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.429 19:03:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:51.429 ************************************ 00:07:51.429 END TEST accel 00:07:51.429 ************************************ 00:07:51.429 19:03:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:51.429 19:03:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:51.429 19:03:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:51.429 19:03:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.429 19:03:57 -- common/autotest_common.sh@10 -- # set +x 00:07:51.429 ************************************ 00:07:51.429 START TEST accel_rpc 00:07:51.429 ************************************ 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:51.429 * Looking for test storage... 00:07:51.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:51.429 19:03:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:51.429 19:03:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1234705 00:07:51.429 19:03:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1234705 00:07:51.429 19:03:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1234705 ']' 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:51.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.429 19:03:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.690 [2024-07-12 19:03:57.607926] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:07:51.690 [2024-07-12 19:03:57.607997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1234705 ] 00:07:51.690 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.690 [2024-07-12 19:03:57.673691] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.690 [2024-07-12 19:03:57.747074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.262 19:03:58 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:52.262 19:03:58 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:52.262 19:03:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:52.262 19:03:58 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:52.262 19:03:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:52.262 19:03:58 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:52.262 19:03:58 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:52.262 19:03:58 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:52.262 19:03:58 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.262 19:03:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:52.522 ************************************ 00:07:52.522 START TEST accel_assign_opcode 00:07:52.523 ************************************ 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 [2024-07-12 19:03:58.425071] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 [2024-07-12 19:03:58.437096] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.523 software 00:07:52.523 00:07:52.523 real 0m0.218s 00:07:52.523 user 0m0.053s 00:07:52.523 sys 0m0.008s 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:52.523 19:03:58 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:52.523 ************************************ 00:07:52.523 END TEST accel_assign_opcode 00:07:52.523 ************************************ 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:52.784 19:03:58 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1234705 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1234705 ']' 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1234705 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1234705 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1234705' 00:07:52.784 killing process with pid 1234705 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@967 -- # kill 1234705 00:07:52.784 19:03:58 accel_rpc -- common/autotest_common.sh@972 -- # wait 1234705 00:07:53.045 00:07:53.045 real 0m1.490s 00:07:53.045 user 0m1.572s 00:07:53.045 sys 0m0.415s 00:07:53.045 19:03:58 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.045 19:03:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.045 ************************************ 00:07:53.045 END TEST accel_rpc 00:07:53.045 ************************************ 00:07:53.045 19:03:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:53.045 19:03:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:53.045 19:03:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:53.045 19:03:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.045 19:03:58 -- common/autotest_common.sh@10 -- # set +x 00:07:53.045 ************************************ 00:07:53.045 START TEST app_cmdline 00:07:53.045 ************************************ 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:53.045 * Looking for test storage... 
00:07:53.045 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:53.045 19:03:59 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:53.045 19:03:59 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1235108 00:07:53.045 19:03:59 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1235108 00:07:53.045 19:03:59 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1235108 ']' 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:53.045 19:03:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:53.045 [2024-07-12 19:03:59.174829] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:07:53.045 [2024-07-12 19:03:59.174894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1235108 ] 00:07:53.373 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.373 [2024-07-12 19:03:59.240153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.373 [2024-07-12 19:03:59.313448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.945 19:03:59 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:53.945 19:03:59 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:53.945 19:03:59 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:54.206 { 00:07:54.206 "version": "SPDK v24.09-pre git sha1 2945695e6", 00:07:54.206 "fields": { 00:07:54.206 "major": 24, 00:07:54.206 "minor": 9, 00:07:54.206 "patch": 0, 00:07:54.206 "suffix": "-pre", 00:07:54.206 "commit": "2945695e6" 00:07:54.206 } 00:07:54.206 } 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:54.206 request: 00:07:54.206 { 00:07:54.206 "method": "env_dpdk_get_mem_stats", 00:07:54.206 "req_id": 1 00:07:54.206 } 00:07:54.206 Got JSON-RPC error response 00:07:54.206 response: 00:07:54.206 { 00:07:54.206 "code": -32601, 00:07:54.206 "message": "Method not found" 00:07:54.206 } 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:54.206 19:04:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1235108 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1235108 ']' 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1235108 00:07:54.206 19:04:00 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1235108 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1235108' 00:07:54.468 killing process with pid 1235108 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@967 -- # kill 1235108 00:07:54.468 19:04:00 app_cmdline -- common/autotest_common.sh@972 -- # wait 1235108 00:07:54.729 00:07:54.729 real 0m1.584s 00:07:54.729 user 0m1.902s 00:07:54.729 sys 0m0.420s 00:07:54.729 19:04:00 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
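Note on the failure above: the env_dpdk_get_mem_stats call returning JSON-RPC error -32601 ("Method not found") is the expected outcome of this test, because the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods and any RPC outside that allow-list is rejected even though the method exists. A minimal sketch (not part of the captured output) of exercising the same allow-list by hand, assuming $SPDK_DIR points at the checkout used in this run and the target listens on the default /var/tmp/spdk.sock shown earlier in the log:

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_DIR"/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    tgt_pid=$!
    sleep 2                                            # crude wait; the harness uses waitforlisten instead
    "$SPDK_DIR"/scripts/rpc.py spdk_get_version        # allowed: returns the version object shown above
    "$SPDK_DIR"/scripts/rpc.py rpc_get_methods         # allowed: lists exactly the two whitelisted methods
    "$SPDK_DIR"/scripts/rpc.py env_dpdk_get_mem_stats  # rejected: JSON-RPC error -32601, as in the trace
    kill "$tgt_pid"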
00:07:54.729 19:04:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:54.729 ************************************ 00:07:54.729 END TEST app_cmdline 00:07:54.729 ************************************ 00:07:54.729 19:04:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.729 19:04:00 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:54.729 19:04:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:54.729 19:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.729 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:54.729 ************************************ 00:07:54.729 START TEST version 00:07:54.729 ************************************ 00:07:54.729 19:04:00 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:54.729 * Looking for test storage... 00:07:54.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:54.729 19:04:00 version -- app/version.sh@17 -- # get_header_version major 00:07:54.729 19:04:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # cut -f2 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.729 19:04:00 version -- app/version.sh@17 -- # major=24 00:07:54.729 19:04:00 version -- app/version.sh@18 -- # get_header_version minor 00:07:54.729 19:04:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # cut -f2 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.729 19:04:00 version -- app/version.sh@18 -- # minor=9 00:07:54.729 19:04:00 version -- app/version.sh@19 -- # get_header_version patch 00:07:54.729 19:04:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # cut -f2 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.729 19:04:00 version -- app/version.sh@19 -- # patch=0 00:07:54.729 19:04:00 version -- app/version.sh@20 -- # get_header_version suffix 00:07:54.729 19:04:00 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # cut -f2 00:07:54.729 19:04:00 version -- app/version.sh@14 -- # tr -d '"' 00:07:54.729 19:04:00 version -- app/version.sh@20 -- # suffix=-pre 00:07:54.729 19:04:00 version -- app/version.sh@22 -- # version=24.9 00:07:54.729 19:04:00 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:54.729 19:04:00 version -- app/version.sh@28 -- # version=24.9rc0 00:07:54.729 19:04:00 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:54.729 19:04:00 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:54.729 19:04:00 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:54.729 19:04:00 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:54.729 00:07:54.729 real 0m0.171s 00:07:54.729 user 0m0.092s 00:07:54.729 sys 0m0.115s 00:07:54.729 19:04:00 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.729 19:04:00 version -- common/autotest_common.sh@10 -- # set +x 00:07:54.729 ************************************ 00:07:54.729 END TEST version 00:07:54.729 ************************************ 00:07:54.991 19:04:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:54.991 19:04:00 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@198 -- # uname -s 00:07:54.991 19:04:00 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:54.991 19:04:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:54.991 19:04:00 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:54.991 19:04:00 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:54.991 19:04:00 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:54.991 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:54.991 19:04:00 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:54.991 19:04:00 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:54.991 19:04:00 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.991 19:04:00 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.991 19:04:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.991 19:04:00 -- common/autotest_common.sh@10 -- # set +x 00:07:54.991 ************************************ 00:07:54.991 START TEST nvmf_tcp 00:07:54.991 ************************************ 00:07:54.991 19:04:00 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:54.991 * Looking for test storage... 00:07:54.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:54.991 19:04:01 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:54.991 19:04:01 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:54.991 19:04:01 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:54.991 19:04:01 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.991 19:04:01 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.991 19:04:01 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.991 19:04:01 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:54.991 19:04:01 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:54.991 19:04:01 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:54.991 19:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:54.991 19:04:01 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:54.991 19:04:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:54.991 19:04:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.991 19:04:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.252 ************************************ 00:07:55.252 START TEST nvmf_example 00:07:55.252 ************************************ 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:55.252 * Looking for test storage... 
00:07:55.252 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.252 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.253 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.253 19:04:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.253 19:04:01 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.392 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:03.393 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:03.393 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:03.393 Found net devices under 
0000:4b:00.0: cvl_0_0 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:03.393 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.582 ms 00:08:03.393 00:08:03.393 --- 10.0.0.2 ping statistics --- 00:08:03.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.393 rtt min/avg/max/mdev = 0.582/0.582/0.582/0.000 ms 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:08:03.393 00:08:03.393 --- 10.0.0.1 ping statistics --- 00:08:03.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.393 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1239220 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1239220 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1239220 ']' 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
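The nvmf_tcp_init sequence traced above amounts to moving the target-side E810 port into a private network namespace and wiring a /24 between the two ports. A condensed sketch of that plumbing (flush steps omitted), using the interface names cvl_0_0/cvl_0_1 and addresses captured in this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                       # target port lives inside the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                             # initiator address on the host side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                              # host -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                # namespace -> host

The two ping round-trips recorded above (0.582 ms and 0.331 ms) confirm the link before nvme-tcp is loaded and the example target is started inside the namespace.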
00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.393 19:04:08 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.393 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.393 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.393 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:08:03.393 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:03.393 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.393 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.393 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:03.394 19:04:09 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:03.394 EAL: No free 2048 kB hugepages reported on node 1 
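The rpc_cmd calls just traced build the whole example target: a TCP transport, a 64 MiB / 512-byte-block malloc ramdisk (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE from the script), the subsystem nqn.2016-06.io.spdk:cnode1, the ramdisk as its namespace, and a listener on 10.0.0.2:4420. rpc_cmd is the harness's wrapper around SPDK's JSON-RPC client; roughly the same configuration could be applied by hand with scripts/rpc.py, where $RPC_SOCK (the example app's RPC socket) is an assumption not shown in this log:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s $RPC_SOCK"
    $RPC nvmf_create_transport -t tcp -o -u 8192           # transport options exactly as captured above
    $RPC bdev_malloc_create 64 512                         # 64 MiB bdev, 512-byte blocks -> Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then drives queue-depth-64, 4 KiB random read/write I/O (mix given by -M 30) against that listener for 10 seconds, producing the latency table that follows.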
00:08:15.627 Initializing NVMe Controllers 00:08:15.627 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:15.627 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:15.627 Initialization complete. Launching workers. 00:08:15.627 ======================================================== 00:08:15.627 Latency(us) 00:08:15.627 Device Information : IOPS MiB/s Average min max 00:08:15.627 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17929.45 70.04 3569.14 827.71 20164.34 00:08:15.627 ======================================================== 00:08:15.627 Total : 17929.45 70.04 3569.14 827.71 20164.34 00:08:15.627 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:15.627 rmmod nvme_tcp 00:08:15.627 rmmod nvme_fabrics 00:08:15.627 rmmod nvme_keyring 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1239220 ']' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1239220 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1239220 ']' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1239220 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1239220 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1239220' 00:08:15.627 killing process with pid 1239220 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1239220 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1239220 00:08:15.627 nvmf threads initialize successfully 00:08:15.627 bdev subsystem init successfully 00:08:15.627 created a nvmf target service 00:08:15.627 create targets's poll groups done 00:08:15.627 all subsystems of target started 00:08:15.627 nvmf target is running 00:08:15.627 all subsystems of target stopped 00:08:15.627 destroy targets's poll groups done 00:08:15.627 destroyed the nvmf target service 00:08:15.627 bdev subsystem finish successfully 00:08:15.627 nvmf threads destroy successfully 00:08:15.627 19:04:19 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.627 19:04:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.888 19:04:21 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.888 19:04:21 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:15.888 19:04:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:15.888 19:04:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.888 00:08:15.888 real 0m20.842s 00:08:15.888 user 0m46.419s 00:08:15.888 sys 0m6.373s 00:08:15.888 19:04:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.888 19:04:21 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:08:15.888 ************************************ 00:08:15.888 END TEST nvmf_example 00:08:15.888 ************************************ 00:08:15.888 19:04:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.888 19:04:22 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:15.888 19:04:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.888 19:04:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.888 19:04:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.152 ************************************ 00:08:16.152 START TEST nvmf_filesystem 00:08:16.152 ************************************ 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:16.152 * Looking for test storage... 
00:08:16.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:16.152 19:04:22 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:16.152 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:16.153 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:16.153 #define SPDK_CONFIG_H 00:08:16.153 #define SPDK_CONFIG_APPS 1 00:08:16.153 #define SPDK_CONFIG_ARCH native 00:08:16.153 #undef SPDK_CONFIG_ASAN 00:08:16.153 #undef SPDK_CONFIG_AVAHI 00:08:16.153 #undef SPDK_CONFIG_CET 00:08:16.153 #define SPDK_CONFIG_COVERAGE 1 00:08:16.153 #define SPDK_CONFIG_CROSS_PREFIX 00:08:16.153 #undef SPDK_CONFIG_CRYPTO 00:08:16.153 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:16.153 #undef SPDK_CONFIG_CUSTOMOCF 00:08:16.153 #undef SPDK_CONFIG_DAOS 00:08:16.153 #define SPDK_CONFIG_DAOS_DIR 00:08:16.153 #define SPDK_CONFIG_DEBUG 1 00:08:16.153 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:16.153 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:08:16.153 #define SPDK_CONFIG_DPDK_INC_DIR 00:08:16.153 #define SPDK_CONFIG_DPDK_LIB_DIR 00:08:16.153 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:16.153 #undef SPDK_CONFIG_DPDK_UADK 00:08:16.153 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:16.153 #define SPDK_CONFIG_EXAMPLES 1 00:08:16.153 #undef SPDK_CONFIG_FC 00:08:16.153 #define SPDK_CONFIG_FC_PATH 00:08:16.153 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:16.153 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:16.153 #undef SPDK_CONFIG_FUSE 00:08:16.153 #undef SPDK_CONFIG_FUZZER 00:08:16.153 #define SPDK_CONFIG_FUZZER_LIB 00:08:16.153 #undef SPDK_CONFIG_GOLANG 00:08:16.153 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:16.153 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:08:16.153 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:16.153 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:08:16.153 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:16.153 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:16.153 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:16.153 #define SPDK_CONFIG_IDXD 1 00:08:16.153 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:16.153 #undef SPDK_CONFIG_IPSEC_MB 00:08:16.153 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:16.153 #define SPDK_CONFIG_ISAL 1 00:08:16.153 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:16.153 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:16.153 #define SPDK_CONFIG_LIBDIR 00:08:16.153 #undef SPDK_CONFIG_LTO 00:08:16.153 #define SPDK_CONFIG_MAX_LCORES 128 00:08:16.153 #define SPDK_CONFIG_NVME_CUSE 1 00:08:16.153 #undef SPDK_CONFIG_OCF 00:08:16.153 #define SPDK_CONFIG_OCF_PATH 00:08:16.153 #define 
SPDK_CONFIG_OPENSSL_PATH 00:08:16.153 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:16.153 #define SPDK_CONFIG_PGO_DIR 00:08:16.154 #undef SPDK_CONFIG_PGO_USE 00:08:16.154 #define SPDK_CONFIG_PREFIX /usr/local 00:08:16.154 #undef SPDK_CONFIG_RAID5F 00:08:16.154 #undef SPDK_CONFIG_RBD 00:08:16.154 #define SPDK_CONFIG_RDMA 1 00:08:16.154 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:16.154 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:16.154 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:16.154 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:16.154 #define SPDK_CONFIG_SHARED 1 00:08:16.154 #undef SPDK_CONFIG_SMA 00:08:16.154 #define SPDK_CONFIG_TESTS 1 00:08:16.154 #undef SPDK_CONFIG_TSAN 00:08:16.154 #define SPDK_CONFIG_UBLK 1 00:08:16.154 #define SPDK_CONFIG_UBSAN 1 00:08:16.154 #undef SPDK_CONFIG_UNIT_TESTS 00:08:16.154 #undef SPDK_CONFIG_URING 00:08:16.154 #define SPDK_CONFIG_URING_PATH 00:08:16.154 #undef SPDK_CONFIG_URING_ZNS 00:08:16.154 #undef SPDK_CONFIG_USDT 00:08:16.154 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:16.154 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:16.154 #define SPDK_CONFIG_VFIO_USER 1 00:08:16.154 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:16.154 #define SPDK_CONFIG_VHOST 1 00:08:16.154 #define SPDK_CONFIG_VIRTIO 1 00:08:16.154 #undef SPDK_CONFIG_VTUNE 00:08:16.154 #define SPDK_CONFIG_VTUNE_DIR 00:08:16.154 #define SPDK_CONFIG_WERROR 1 00:08:16.154 #define SPDK_CONFIG_WPDK_DIR 00:08:16.154 #undef SPDK_CONFIG_XNVME 00:08:16.154 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:08:16.154 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:08:16.155 19:04:22 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:16.155 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
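The long run of ": 0" / "export SPDK_TEST_..." pairs traced from common/autotest_common.sh above is bash's parameter-default idiom: each flag keeps whatever autorun-spdk.conf already exported and otherwise falls back to the value after the colon, then is re-exported so every sourced script and spawned target sees the same setting. A minimal sketch of that pattern, assuming the usual ":=" spelling behind the ": value" trace lines (flag names and this run's values come from the log; the fallback defaults shown are illustrative, not confirmed by the trace):

    # Keep the caller's value when set (autorun-spdk.conf exported these),
    # otherwise assign the default after ':=', then export the result.
    : "${SPDK_RUN_FUNCTIONAL_TEST:=0}";   export SPDK_RUN_FUNCTIONAL_TEST
    : "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
    : "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
    : "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN

Under xtrace this expands to exactly the "-- # : 0" / "-- # export ..." pairs seen above, which is why the log shows the resolved values (1, tcp, e810) rather than the assignments themselves.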
00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1242070 ]] 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1242070 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.4CLNEA 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:08:16.156 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4CLNEA/tests/target /tmp/spdk.4CLNEA 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=954236928 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330192896 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=118700523520 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129371013120 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10670489600 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680796160 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864503296 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874202624 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9699328 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=216064 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:08:16.419 19:04:22 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=287744 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64684363776 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685506560 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1142784 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937097216 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937101312 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:08:16.419 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:08:16.420 * Looking for test storage... 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=118700523520 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=12885082112 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.420 19:04:22 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:24.566 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 
(0x8086 - 0x159b)' 00:08:24.566 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:24.566 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:24.566 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.566 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:24.567 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.567 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.579 ms 00:08:24.567 00:08:24.567 --- 10.0.0.2 ping statistics --- 00:08:24.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.567 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.567 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:24.567 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.338 ms 00:08:24.567 00:08:24.567 --- 10.0.0.1 ping statistics --- 00:08:24.567 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.567 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 ************************************ 00:08:24.567 START TEST nvmf_filesystem_no_in_capsule 00:08:24.567 ************************************ 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1245906 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1245906 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1245906 ']' 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:24.567 19:04:29 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 [2024-07-12 19:04:29.617993] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:08:24.567 [2024-07-12 19:04:29.618040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.567 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.567 [2024-07-12 19:04:29.682029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.567 [2024-07-12 19:04:29.749397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.567 [2024-07-12 19:04:29.749431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.567 [2024-07-12 19:04:29.749438] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.567 [2024-07-12 19:04:29.749445] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.567 [2024-07-12 19:04:29.749450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.567 [2024-07-12 19:04:29.749603] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.567 [2024-07-12 19:04:29.749717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.567 [2024-07-12 19:04:29.749872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.567 [2024-07-12 19:04:29.749873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 [2024-07-12 19:04:30.434798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
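The trace above shows target/filesystem.sh standing up the TCP transport inside the cvl_0_0_ns_spdk namespace and then provisioning the test subsystem over the next few RPCs. For reference only (not part of the captured output), the same bring-up driven by hand with scripts/rpc.py would look roughly like the sketch below, assuming the default /var/tmp/spdk.sock RPC socket; the values are copied from the rpc_cmd calls in this run:

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # -c 0: no in-capsule data for this test variant
  scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1               # 512 MiB backing bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420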
00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:24.567 [2024-07-12 19:04:30.562547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:08:24.567 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:24.568 { 00:08:24.568 "name": "Malloc1", 00:08:24.568 "aliases": [ 00:08:24.568 "366ea4e1-00f7-4cec-a48f-b746423a5d1c" 00:08:24.568 ], 00:08:24.568 "product_name": "Malloc disk", 00:08:24.568 "block_size": 512, 00:08:24.568 "num_blocks": 1048576, 00:08:24.568 "uuid": "366ea4e1-00f7-4cec-a48f-b746423a5d1c", 00:08:24.568 "assigned_rate_limits": { 00:08:24.568 "rw_ios_per_sec": 0, 00:08:24.568 "rw_mbytes_per_sec": 0, 00:08:24.568 "r_mbytes_per_sec": 0, 00:08:24.568 "w_mbytes_per_sec": 0 00:08:24.568 }, 00:08:24.568 "claimed": true, 00:08:24.568 "claim_type": "exclusive_write", 00:08:24.568 "zoned": false, 00:08:24.568 "supported_io_types": { 00:08:24.568 "read": true, 00:08:24.568 "write": true, 00:08:24.568 "unmap": true, 00:08:24.568 "flush": true, 00:08:24.568 "reset": true, 00:08:24.568 "nvme_admin": false, 00:08:24.568 "nvme_io": false, 00:08:24.568 "nvme_io_md": false, 00:08:24.568 "write_zeroes": true, 00:08:24.568 "zcopy": true, 00:08:24.568 "get_zone_info": false, 00:08:24.568 "zone_management": false, 00:08:24.568 "zone_append": false, 00:08:24.568 "compare": false, 00:08:24.568 "compare_and_write": false, 00:08:24.568 "abort": true, 00:08:24.568 "seek_hole": false, 00:08:24.568 "seek_data": false, 00:08:24.568 "copy": true, 00:08:24.568 "nvme_iov_md": false 00:08:24.568 }, 00:08:24.568 "memory_domains": [ 00:08:24.568 { 00:08:24.568 "dma_device_id": "system", 00:08:24.568 "dma_device_type": 1 00:08:24.568 }, 00:08:24.568 { 00:08:24.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:24.568 "dma_device_type": 2 00:08:24.568 } 00:08:24.568 ], 00:08:24.568 "driver_specific": {} 00:08:24.568 } 00:08:24.568 ]' 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:24.568 19:04:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:26.480 19:04:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:26.480 19:04:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:26.480 19:04:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:08:26.480 19:04:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:26.480 19:04:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:28.389 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:28.389 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:28.390 19:04:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:29.330 19:04:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:30.271 
************************************ 00:08:30.271 START TEST filesystem_ext4 00:08:30.271 ************************************ 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:30.271 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:30.271 mke2fs 1.46.5 (30-Dec-2021) 00:08:30.271 Discarding device blocks: 0/522240 done 00:08:30.271 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:30.271 Filesystem UUID: 8d00f525-a1e0-4c3f-8ae9-d9d15acd87d5 00:08:30.271 Superblock backups stored on blocks: 00:08:30.271 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:30.271 00:08:30.271 Allocating group tables: 0/64 done 00:08:30.271 Writing inode tables: 0/64 done 00:08:30.532 Creating journal (8192 blocks): done 00:08:30.532 Writing superblocks and filesystem accounting information: 0/64 done 00:08:30.532 00:08:30.532 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:30.532 19:04:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:31.472 19:04:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1245906 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:31.472 00:08:31.472 real 0m1.136s 00:08:31.472 user 0m0.021s 00:08:31.472 sys 0m0.072s 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:31.472 ************************************ 00:08:31.472 END TEST filesystem_ext4 00:08:31.472 ************************************ 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:31.472 ************************************ 00:08:31.472 START TEST filesystem_btrfs 00:08:31.472 ************************************ 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:31.472 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:31.472 
19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:32.043 btrfs-progs v6.6.2 00:08:32.043 See https://btrfs.readthedocs.io for more information. 00:08:32.043 00:08:32.043 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:08:32.043 NOTE: several default settings have changed in version 5.15, please make sure 00:08:32.043 this does not affect your deployments: 00:08:32.043 - DUP for metadata (-m dup) 00:08:32.043 - enabled no-holes (-O no-holes) 00:08:32.043 - enabled free-space-tree (-R free-space-tree) 00:08:32.043 00:08:32.043 Label: (null) 00:08:32.043 UUID: bf6f6cd9-151d-40bc-93df-0c157778160b 00:08:32.043 Node size: 16384 00:08:32.043 Sector size: 4096 00:08:32.043 Filesystem size: 510.00MiB 00:08:32.043 Block group profiles: 00:08:32.043 Data: single 8.00MiB 00:08:32.043 Metadata: DUP 32.00MiB 00:08:32.043 System: DUP 8.00MiB 00:08:32.043 SSD detected: yes 00:08:32.043 Zoned device: no 00:08:32.043 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:32.043 Runtime features: free-space-tree 00:08:32.043 Checksum: crc32c 00:08:32.043 Number of devices: 1 00:08:32.043 Devices: 00:08:32.043 ID SIZE PATH 00:08:32.043 1 510.00MiB /dev/nvme0n1p1 00:08:32.043 00:08:32.043 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:32.043 19:04:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:32.303 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1245906 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:32.304 00:08:32.304 real 0m0.874s 00:08:32.304 user 0m0.028s 00:08:32.304 sys 0m0.138s 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
00:08:32.304 ************************************ 00:08:32.304 END TEST filesystem_btrfs 00:08:32.304 ************************************ 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.304 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:32.565 ************************************ 00:08:32.565 START TEST filesystem_xfs 00:08:32.565 ************************************ 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:32.565 19:04:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:32.565 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:32.565 = sectsz=512 attr=2, projid32bit=1 00:08:32.565 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:32.565 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:32.565 data = bsize=4096 blocks=130560, imaxpct=25 00:08:32.565 = sunit=0 swidth=0 blks 00:08:32.566 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:32.566 log =internal log bsize=4096 blocks=16384, version=2 00:08:32.566 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:32.566 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:33.506 Discarding blocks...Done. 
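A quick check on the sizes reported by the three mkfs runs: the Malloc1 bdev is 1048576 blocks x 512 B = 536870912 B = 512 MiB, and the single GPT partition created earlier with 'parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%' comes out at about 510 MiB once roughly 2 MiB is given up to partition alignment and GPT metadata. That is consistent across all three filesystems: ext4 reports 522240 1 KiB blocks (510 MiB), btrfs reports 510.00MiB, and xfs reports 130560 4 KiB blocks (510 MiB again).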
00:08:33.507 19:04:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:33.507 19:04:39 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1245906 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:36.050 00:08:36.050 real 0m3.218s 00:08:36.050 user 0m0.027s 00:08:36.050 sys 0m0.075s 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:36.050 ************************************ 00:08:36.050 END TEST filesystem_xfs 00:08:36.050 ************************************ 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:36.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.050 19:04:41 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1245906 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1245906 ']' 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1245906 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1245906 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1245906' 00:08:36.050 killing process with pid 1245906 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1245906 00:08:36.050 19:04:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1245906 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:36.311 00:08:36.311 real 0m12.667s 00:08:36.311 user 0m49.932s 00:08:36.311 sys 0m1.207s 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 ************************************ 00:08:36.311 END TEST nvmf_filesystem_no_in_capsule 00:08:36.311 ************************************ 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 ************************************ 00:08:36.311 START TEST nvmf_filesystem_in_capsule 00:08:36.311 ************************************ 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1248533 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1248533 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1248533 ']' 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.311 19:04:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:36.311 [2024-07-12 19:04:42.359908] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:08:36.312 [2024-07-12 19:04:42.359953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.312 EAL: No free 2048 kB hugepages reported on node 1 00:08:36.312 [2024-07-12 19:04:42.424766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.573 [2024-07-12 19:04:42.489873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.573 [2024-07-12 19:04:42.489911] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:36.573 [2024-07-12 19:04:42.489918] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.573 [2024-07-12 19:04:42.489925] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.573 [2024-07-12 19:04:42.489930] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:36.573 [2024-07-12 19:04:42.490077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.573 [2024-07-12 19:04:42.490208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.573 [2024-07-12 19:04:42.490538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.573 [2024-07-12 19:04:42.490539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.144 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.144 [2024-07-12 19:04:43.174802] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.145 Malloc1 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.145 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.406 19:04:43 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.406 [2024-07-12 19:04:43.304547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:37.406 { 00:08:37.406 "name": "Malloc1", 00:08:37.406 "aliases": [ 00:08:37.406 "55d0742a-eb34-4dbc-9602-8df72d6f3451" 00:08:37.406 ], 00:08:37.406 "product_name": "Malloc disk", 00:08:37.406 "block_size": 512, 00:08:37.406 "num_blocks": 1048576, 00:08:37.406 "uuid": "55d0742a-eb34-4dbc-9602-8df72d6f3451", 00:08:37.406 "assigned_rate_limits": { 00:08:37.406 "rw_ios_per_sec": 0, 00:08:37.406 "rw_mbytes_per_sec": 0, 00:08:37.406 "r_mbytes_per_sec": 0, 00:08:37.406 "w_mbytes_per_sec": 0 00:08:37.406 }, 00:08:37.406 "claimed": true, 00:08:37.406 "claim_type": "exclusive_write", 00:08:37.406 "zoned": false, 00:08:37.406 "supported_io_types": { 00:08:37.406 "read": true, 00:08:37.406 "write": true, 00:08:37.406 "unmap": true, 00:08:37.406 "flush": true, 00:08:37.406 "reset": true, 00:08:37.406 "nvme_admin": false, 00:08:37.406 "nvme_io": false, 00:08:37.406 "nvme_io_md": false, 00:08:37.406 "write_zeroes": true, 00:08:37.406 "zcopy": true, 00:08:37.406 "get_zone_info": false, 00:08:37.406 "zone_management": false, 00:08:37.406 
"zone_append": false, 00:08:37.406 "compare": false, 00:08:37.406 "compare_and_write": false, 00:08:37.406 "abort": true, 00:08:37.406 "seek_hole": false, 00:08:37.406 "seek_data": false, 00:08:37.406 "copy": true, 00:08:37.406 "nvme_iov_md": false 00:08:37.406 }, 00:08:37.406 "memory_domains": [ 00:08:37.406 { 00:08:37.406 "dma_device_id": "system", 00:08:37.406 "dma_device_type": 1 00:08:37.406 }, 00:08:37.406 { 00:08:37.406 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:37.406 "dma_device_type": 2 00:08:37.406 } 00:08:37.406 ], 00:08:37.406 "driver_specific": {} 00:08:37.406 } 00:08:37.406 ]' 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:37.406 19:04:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:39.319 19:04:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:39.319 19:04:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:39.319 19:04:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:39.319 19:04:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:39.319 19:04:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:41.234 19:04:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:41.234 19:04:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:41.234 19:04:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:41.234 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:41.806 19:04:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:43.192 ************************************ 00:08:43.192 START TEST filesystem_in_capsule_ext4 00:08:43.192 ************************************ 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:43.192 19:04:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:43.192 19:04:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:43.192 mke2fs 1.46.5 (30-Dec-2021) 00:08:43.192 Discarding device blocks: 0/522240 done 00:08:43.192 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:43.192 Filesystem UUID: 2c4e60c8-4f64-4bc4-ba7a-9da54a612fdd 00:08:43.192 Superblock backups stored on blocks: 00:08:43.192 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:43.192 00:08:43.192 Allocating group tables: 0/64 done 00:08:43.192 Writing inode tables: 0/64 1/64 done 00:08:43.192 Creating journal (8192 blocks): done 00:08:43.192 Writing superblocks and filesystem accounting information: 0/64 done 00:08:43.192 00:08:43.192 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:43.192 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1248533 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:43.764 00:08:43.764 real 0m0.935s 00:08:43.764 user 0m0.029s 00:08:43.764 sys 0m0.065s 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:43.764 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:43.764 ************************************ 00:08:43.764 END TEST filesystem_in_capsule_ext4 00:08:43.764 ************************************ 
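Once mkfs.ext4 succeeds, the trace runs the generic mount smoke test (filesystem.sh steps 23-43): mount the new partition, create and delete a file, unmount, confirm the nvmf_tgt process is still alive, and confirm both the namespace and its partition remain visible. A sketch of that sequence; the pid 1248533 belongs to this particular run and is only a placeholder:

    # Sketch of the post-mkfs smoke test seen above; run-specific values are placeholders.
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device

    kill -0 1248533                              # nvmf_tgt (pid from this run) must still be running
    lsblk -l -o NAME | grep -q -w nvme0n1        # namespace still exported
    lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition table survived the exercise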
00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.025 ************************************ 00:08:44.025 START TEST filesystem_in_capsule_btrfs 00:08:44.025 ************************************ 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:44.025 19:04:49 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:44.286 btrfs-progs v6.6.2 00:08:44.286 See https://btrfs.readthedocs.io for more information. 00:08:44.286 00:08:44.286 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:44.286 NOTE: several default settings have changed in version 5.15, please make sure 00:08:44.286 this does not affect your deployments: 00:08:44.286 - DUP for metadata (-m dup) 00:08:44.286 - enabled no-holes (-O no-holes) 00:08:44.286 - enabled free-space-tree (-R free-space-tree) 00:08:44.286 00:08:44.286 Label: (null) 00:08:44.286 UUID: 0591be54-75ec-4ac6-a011-62cba0e1ddd5 00:08:44.286 Node size: 16384 00:08:44.286 Sector size: 4096 00:08:44.286 Filesystem size: 510.00MiB 00:08:44.286 Block group profiles: 00:08:44.286 Data: single 8.00MiB 00:08:44.286 Metadata: DUP 32.00MiB 00:08:44.286 System: DUP 8.00MiB 00:08:44.286 SSD detected: yes 00:08:44.286 Zoned device: no 00:08:44.286 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:44.286 Runtime features: free-space-tree 00:08:44.286 Checksum: crc32c 00:08:44.286 Number of devices: 1 00:08:44.286 Devices: 00:08:44.286 ID SIZE PATH 00:08:44.286 1 510.00MiB /dev/nvme0n1p1 00:08:44.286 00:08:44.286 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:44.287 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1248533 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:44.548 00:08:44.548 real 0m0.490s 00:08:44.548 user 0m0.035s 00:08:44.548 sys 0m0.125s 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:44.548 ************************************ 00:08:44.548 END TEST filesystem_in_capsule_btrfs 00:08:44.548 ************************************ 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:44.548 ************************************ 00:08:44.548 START TEST filesystem_in_capsule_xfs 00:08:44.548 ************************************ 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:44.548 19:04:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:44.548 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:44.548 = sectsz=512 attr=2, projid32bit=1 00:08:44.548 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:44.548 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:44.548 data = bsize=4096 blocks=130560, imaxpct=25 00:08:44.548 = sunit=0 swidth=0 blks 00:08:44.548 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:44.548 log =internal log bsize=4096 blocks=16384, version=2 00:08:44.548 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:44.548 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:45.491 Discarding blocks...Done. 
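All three runs (ext4, btrfs, xfs) go through the same make_filesystem helper; the only branch visible in the trace is the force flag, spelled -F for mkfs.ext4 and -f for the other tools. An approximate reconstruction of that helper, inferred from the traced branches rather than copied from common/autotest_common.sh (the real helper also carries a retry counter that never fires in this run):

    # Inferred shape of make_filesystem (common/autotest_common.sh ~924-943); an approximation only.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mke2fs spells "force" as -F
        else
            force=-f        # mkfs.btrfs and mkfs.xfs use -f
        fi
        mkfs.$fstype $force "$dev_name"
    }

    make_filesystem xfs /dev/nvme0n1p1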
00:08:45.491 19:04:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:45.491 19:04:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1248533 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:48.035 00:08:48.035 real 0m3.171s 00:08:48.035 user 0m0.026s 00:08:48.035 sys 0m0.077s 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:48.035 ************************************ 00:08:48.035 END TEST filesystem_in_capsule_xfs 00:08:48.035 ************************************ 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:48.035 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:48.036 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:48.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.036 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:48.036 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:48.036 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:48.036 19:04:53 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.036 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:48.036 19:04:53 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1248533 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1248533 ']' 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1248533 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1248533 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1248533' 00:08:48.036 killing process with pid 1248533 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1248533 00:08:48.036 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1248533 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:48.297 00:08:48.297 real 0m12.009s 00:08:48.297 user 0m47.319s 00:08:48.297 sys 0m1.192s 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:48.297 ************************************ 00:08:48.297 END TEST nvmf_filesystem_in_capsule 00:08:48.297 ************************************ 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:48.297 rmmod nvme_tcp 00:08:48.297 rmmod nvme_fabrics 00:08:48.297 rmmod nvme_keyring 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:48.297 19:04:54 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.867 19:04:56 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:50.867 00:08:50.867 real 0m34.436s 00:08:50.867 user 1m39.431s 00:08:50.867 sys 0m7.919s 00:08:50.867 19:04:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:50.867 19:04:56 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 ************************************ 00:08:50.867 END TEST nvmf_filesystem 00:08:50.867 ************************************ 00:08:50.867 19:04:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:50.867 19:04:56 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:50.867 19:04:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:50.867 19:04:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.867 19:04:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:50.867 ************************************ 00:08:50.867 START TEST nvmf_target_discovery 00:08:50.867 ************************************ 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:50.867 * Looking for test storage... 
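The filesystem suite ends here (about 34 s wall-clock, with the NVMe/TCP modules unloaded again) and run_test hands off to discovery.sh. Before that test can start a target, nvmftestinit re-detects the two e810 ports and rebuilds the split topology that the following records trace: one port (cvl_0_0) is moved into a namespace and addressed as 10.0.0.2, the other (cvl_0_1) stays in the root namespace as 10.0.0.1, and TCP port 4420 is opened in iptables. A condensed sketch of that sequence, using only commands that appear verbatim below; the cvl_0_* names are specific to this host:

    # Condensed from nvmf/common.sh 229-268 as traced below.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator side -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> initiator side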
00:08:50.867 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:50.867 19:04:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:57.460 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.461 19:05:03 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:08:57.461 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:08:57.461 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:08:57.461 Found net devices under 0000:4b:00.0: cvl_0_0 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:08:57.461 Found net devices under 0000:4b:00.1: cvl_0_1 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.461 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:57.722 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.722 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.521 ms 00:08:57.722 00:08:57.722 --- 10.0.0.2 ping statistics --- 00:08:57.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.722 rtt min/avg/max/mdev = 0.521/0.521/0.521/0.000 ms 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.722 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.722 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:08:57.722 00:08:57.722 --- 10.0.0.1 ping statistics --- 00:08:57.722 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.722 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:57.722 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:57.723 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:57.723 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1255353 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1255353 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1255353 ']' 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:57.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:57.983 19:05:03 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:57.983 [2024-07-12 19:05:03.909832] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:08:57.984 [2024-07-12 19:05:03.909896] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.984 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.984 [2024-07-12 19:05:03.980193] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.984 [2024-07-12 19:05:04.055967] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.984 [2024-07-12 19:05:04.056004] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.984 [2024-07-12 19:05:04.056011] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:57.984 [2024-07-12 19:05:04.056018] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:57.984 [2024-07-12 19:05:04.056023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.984 [2024-07-12 19:05:04.056153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.984 [2024-07-12 19:05:04.056245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.984 [2024-07-12 19:05:04.056396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.984 [2024-07-12 19:05:04.056397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 [2024-07-12 19:05:04.737767] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 Null1 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 [2024-07-12 19:05:04.798081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 Null2 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:58.949 19:05:04 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 Null3 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.949 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.949 Null4 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.950 19:05:04 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:58.950 19:05:04 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:08:59.232 00:08:59.232 Discovery Log Number of Records 6, Generation counter 6 00:08:59.232 =====Discovery Log Entry 0====== 00:08:59.232 trtype: tcp 00:08:59.232 adrfam: ipv4 00:08:59.232 subtype: current discovery subsystem 00:08:59.232 treq: not required 00:08:59.232 portid: 0 00:08:59.232 trsvcid: 4420 00:08:59.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:59.232 traddr: 10.0.0.2 00:08:59.232 eflags: explicit discovery connections, duplicate discovery information 00:08:59.232 sectype: none 00:08:59.232 =====Discovery Log Entry 1====== 00:08:59.232 trtype: tcp 00:08:59.232 adrfam: ipv4 00:08:59.232 subtype: nvme subsystem 00:08:59.232 treq: not required 00:08:59.232 portid: 0 00:08:59.232 trsvcid: 4420 00:08:59.232 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:59.232 traddr: 10.0.0.2 00:08:59.232 eflags: none 00:08:59.232 sectype: none 00:08:59.232 =====Discovery Log Entry 2====== 00:08:59.232 trtype: tcp 00:08:59.232 adrfam: ipv4 00:08:59.232 subtype: nvme subsystem 00:08:59.232 treq: not required 00:08:59.232 portid: 0 00:08:59.232 trsvcid: 4420 00:08:59.232 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:59.232 traddr: 10.0.0.2 00:08:59.232 eflags: none 00:08:59.232 sectype: none 00:08:59.232 =====Discovery Log Entry 3====== 00:08:59.232 trtype: tcp 00:08:59.232 adrfam: ipv4 00:08:59.232 subtype: nvme subsystem 00:08:59.232 treq: not required 00:08:59.232 portid: 0 00:08:59.232 trsvcid: 4420 00:08:59.232 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:59.232 traddr: 10.0.0.2 00:08:59.232 eflags: none 00:08:59.232 sectype: none 00:08:59.232 =====Discovery Log Entry 4====== 00:08:59.232 trtype: tcp 00:08:59.232 adrfam: ipv4 00:08:59.232 subtype: nvme subsystem 00:08:59.232 treq: not required 
00:08:59.232 portid: 0 00:08:59.232 trsvcid: 4420 00:08:59.232 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:59.232 traddr: 10.0.0.2 00:08:59.232 eflags: none 00:08:59.232 sectype: none 00:08:59.232 =====Discovery Log Entry 5====== 00:08:59.232 trtype: tcp 00:08:59.232 adrfam: ipv4 00:08:59.232 subtype: discovery subsystem referral 00:08:59.232 treq: not required 00:08:59.232 portid: 0 00:08:59.232 trsvcid: 4430 00:08:59.232 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:59.232 traddr: 10.0.0.2 00:08:59.232 eflags: none 00:08:59.232 sectype: none 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:59.232 Perform nvmf subsystem discovery via RPC 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 [ 00:08:59.232 { 00:08:59.232 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:59.232 "subtype": "Discovery", 00:08:59.232 "listen_addresses": [ 00:08:59.232 { 00:08:59.232 "trtype": "TCP", 00:08:59.232 "adrfam": "IPv4", 00:08:59.232 "traddr": "10.0.0.2", 00:08:59.232 "trsvcid": "4420" 00:08:59.232 } 00:08:59.232 ], 00:08:59.232 "allow_any_host": true, 00:08:59.232 "hosts": [] 00:08:59.232 }, 00:08:59.232 { 00:08:59.232 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:59.232 "subtype": "NVMe", 00:08:59.232 "listen_addresses": [ 00:08:59.232 { 00:08:59.232 "trtype": "TCP", 00:08:59.232 "adrfam": "IPv4", 00:08:59.232 "traddr": "10.0.0.2", 00:08:59.232 "trsvcid": "4420" 00:08:59.232 } 00:08:59.232 ], 00:08:59.232 "allow_any_host": true, 00:08:59.232 "hosts": [], 00:08:59.232 "serial_number": "SPDK00000000000001", 00:08:59.232 "model_number": "SPDK bdev Controller", 00:08:59.232 "max_namespaces": 32, 00:08:59.232 "min_cntlid": 1, 00:08:59.232 "max_cntlid": 65519, 00:08:59.232 "namespaces": [ 00:08:59.232 { 00:08:59.232 "nsid": 1, 00:08:59.232 "bdev_name": "Null1", 00:08:59.232 "name": "Null1", 00:08:59.232 "nguid": "679E4199A2A8454CB3B4222EF67880F4", 00:08:59.232 "uuid": "679e4199-a2a8-454c-b3b4-222ef67880f4" 00:08:59.232 } 00:08:59.232 ] 00:08:59.232 }, 00:08:59.232 { 00:08:59.232 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:59.232 "subtype": "NVMe", 00:08:59.232 "listen_addresses": [ 00:08:59.232 { 00:08:59.232 "trtype": "TCP", 00:08:59.232 "adrfam": "IPv4", 00:08:59.232 "traddr": "10.0.0.2", 00:08:59.232 "trsvcid": "4420" 00:08:59.232 } 00:08:59.232 ], 00:08:59.232 "allow_any_host": true, 00:08:59.232 "hosts": [], 00:08:59.232 "serial_number": "SPDK00000000000002", 00:08:59.232 "model_number": "SPDK bdev Controller", 00:08:59.232 "max_namespaces": 32, 00:08:59.232 "min_cntlid": 1, 00:08:59.232 "max_cntlid": 65519, 00:08:59.232 "namespaces": [ 00:08:59.232 { 00:08:59.232 "nsid": 1, 00:08:59.232 "bdev_name": "Null2", 00:08:59.232 "name": "Null2", 00:08:59.232 "nguid": "59AED188C3E04069A5CE200F096C8AD8", 00:08:59.232 "uuid": "59aed188-c3e0-4069-a5ce-200f096c8ad8" 00:08:59.232 } 00:08:59.232 ] 00:08:59.232 }, 00:08:59.232 { 00:08:59.232 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:59.232 "subtype": "NVMe", 00:08:59.232 "listen_addresses": [ 00:08:59.232 { 00:08:59.232 "trtype": "TCP", 00:08:59.232 "adrfam": "IPv4", 00:08:59.232 "traddr": "10.0.0.2", 00:08:59.232 "trsvcid": "4420" 00:08:59.232 } 00:08:59.232 ], 00:08:59.232 "allow_any_host": true, 
00:08:59.232 "hosts": [], 00:08:59.232 "serial_number": "SPDK00000000000003", 00:08:59.232 "model_number": "SPDK bdev Controller", 00:08:59.232 "max_namespaces": 32, 00:08:59.232 "min_cntlid": 1, 00:08:59.232 "max_cntlid": 65519, 00:08:59.232 "namespaces": [ 00:08:59.232 { 00:08:59.232 "nsid": 1, 00:08:59.232 "bdev_name": "Null3", 00:08:59.232 "name": "Null3", 00:08:59.232 "nguid": "D812CC641A794E11B82A7F086248A746", 00:08:59.232 "uuid": "d812cc64-1a79-4e11-b82a-7f086248a746" 00:08:59.232 } 00:08:59.232 ] 00:08:59.232 }, 00:08:59.232 { 00:08:59.232 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:59.232 "subtype": "NVMe", 00:08:59.232 "listen_addresses": [ 00:08:59.232 { 00:08:59.232 "trtype": "TCP", 00:08:59.232 "adrfam": "IPv4", 00:08:59.232 "traddr": "10.0.0.2", 00:08:59.232 "trsvcid": "4420" 00:08:59.232 } 00:08:59.232 ], 00:08:59.232 "allow_any_host": true, 00:08:59.232 "hosts": [], 00:08:59.232 "serial_number": "SPDK00000000000004", 00:08:59.232 "model_number": "SPDK bdev Controller", 00:08:59.232 "max_namespaces": 32, 00:08:59.232 "min_cntlid": 1, 00:08:59.232 "max_cntlid": 65519, 00:08:59.232 "namespaces": [ 00:08:59.232 { 00:08:59.232 "nsid": 1, 00:08:59.232 "bdev_name": "Null4", 00:08:59.232 "name": "Null4", 00:08:59.232 "nguid": "A5424D73959845D6A9C0CE24A6E34EFA", 00:08:59.232 "uuid": "a5424d73-9598-45d6-a9c0-ce24a6e34efa" 00:08:59.232 } 00:08:59.232 ] 00:08:59.232 } 00:08:59.232 ] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:59.232 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:59.233 rmmod nvme_tcp 00:08:59.233 rmmod nvme_fabrics 00:08:59.233 rmmod nvme_keyring 00:08:59.233 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1255353 ']' 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1255353 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1255353 ']' 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1255353 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1255353 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1255353' 00:08:59.492 killing process with pid 1255353 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1255353 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1255353 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:59.492 19:05:05 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.035 19:05:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:02.035 00:09:02.035 real 0m11.058s 00:09:02.035 user 0m8.162s 00:09:02.035 sys 0m5.700s 00:09:02.035 19:05:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:02.035 19:05:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 ************************************ 00:09:02.035 END TEST nvmf_target_discovery 00:09:02.035 ************************************ 00:09:02.035 19:05:07 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:09:02.035 19:05:07 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:02.035 19:05:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:02.035 19:05:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.035 19:05:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:02.035 ************************************ 00:09:02.035 START TEST nvmf_referrals 00:09:02.035 ************************************ 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:09:02.035 * Looking for test storage... 00:09:02.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.035 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
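The nvmf_referrals test that starts above sources test/nvmf/common.sh and sets three referral addresses (127.0.0.2, 127.0.0.3, 127.0.0.4); the referral port 4430 and the NQNs it uses are set just below, and the rest of this trace adds, lists and removes those referrals through both the RPC and the nvme-cli discovery paths. Pulled out of the harness, that round-trip looks roughly like this sketch (assumptions: scripts/rpc.py on the default RPC socket, with jq and nvme-cli available; the individual commands mirror the invocations that appear later in this log).

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192           # same transport options the trace uses
$rpc nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_add_referral -t tcp -a $ip -s 4430
done
# RPC-side view of the referrals:
$rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr'
# Initiator-side view via the discovery service on port 8009:
nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
    $rpc nvmf_discovery_remove_referral -t tcp -a $ip -s 4430
done

Both listings should report the same three addresses, which is exactly the get_referral_ips rpc vs. get_referral_ips nvme comparison the test performs further down.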
00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:09:02.036 19:05:07 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.625 19:05:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:08.625 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:08.625 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:08.625 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.626 19:05:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:08.626 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:08.626 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.626 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.887 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.887 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.888 19:05:14 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:08.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:08.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:09:08.888 00:09:08.888 --- 10.0.0.2 ping statistics --- 00:09:08.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.888 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:08.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:08.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:09:08.888 00:09:08.888 --- 10.0.0.1 ping statistics --- 00:09:08.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:08.888 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1259799 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1259799 00:09:08.888 19:05:14 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1259799 ']' 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
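The block above is nvmf_tcp_init wiring the two ice/E810 ports into a loopback topology for the phy test: cvl_0_0 is moved into a private namespace (cvl_0_0_ns_spdk) and acts as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, TCP/4420 is opened in iptables and both directions are ping-checked, after which nvmfappstart launches nvmf_tgt inside the namespace. Condensed, the wiring amounts to the sketch below (interface names, addresses and app flags are taken from the trace; the relative nvmf_tgt path stands in for the workspace-absolute path the harness uses).

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# target application started inside the namespace, as the trace shows:
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The sub-millisecond ping RTTs printed above confirm the namespace plumbing before the DPDK/EAL initialization that follows.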
00:09:08.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:08.888 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.149 [2024-07-12 19:05:15.055984] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:09:09.149 [2024-07-12 19:05:15.056048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.149 EAL: No free 2048 kB hugepages reported on node 1 00:09:09.149 [2024-07-12 19:05:15.129427] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.149 [2024-07-12 19:05:15.204741] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.150 [2024-07-12 19:05:15.204783] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.150 [2024-07-12 19:05:15.204791] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.150 [2024-07-12 19:05:15.204798] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.150 [2024-07-12 19:05:15.204803] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:09.150 [2024-07-12 19:05:15.204941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.150 [2024-07-12 19:05:15.205055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.150 [2024-07-12 19:05:15.205213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.150 [2024-07-12 19:05:15.205213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.721 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:09.721 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:09:09.721 19:05:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:09.721 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:09.721 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 [2024-07-12 19:05:15.878756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 [2024-07-12 19:05:15.890947] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:09.983 19:05:15 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:09.983 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:10.244 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:10.245 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.245 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:10.245 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:10.506 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:10.767 19:05:16 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:10.767 19:05:16 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:11.027 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:11.288 19:05:17 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.288 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 8009 -o json 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:09:11.548 
19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:11.548 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:11.548 rmmod nvme_tcp 00:09:11.548 rmmod nvme_fabrics 00:09:11.810 rmmod nvme_keyring 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1259799 ']' 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1259799 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1259799 ']' 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1259799 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1259799 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1259799' 00:09:11.810 killing process with pid 1259799 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1259799 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1259799 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:11.810 19:05:17 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.357 19:05:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:14.357 00:09:14.357 real 0m12.272s 00:09:14.357 user 0m13.736s 00:09:14.357 sys 0m5.965s 00:09:14.357 19:05:19 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:14.357 19:05:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:09:14.357 ************************************ 00:09:14.357 END TEST nvmf_referrals 00:09:14.357 ************************************ 00:09:14.357 19:05:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:14.357 19:05:20 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:14.357 19:05:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:14.357 19:05:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:14.358 19:05:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:14.358 ************************************ 00:09:14.358 START TEST nvmf_connect_disconnect 00:09:14.358 ************************************ 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:14.358 * Looking for test storage... 00:09:14.358 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.358 19:05:20 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:09:14.358 19:05:20 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:20.952 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:20.953 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:20.953 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:20.953 19:05:26 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:20.953 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:20.953 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:20.953 19:05:26 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:20.953 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:20.953 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:21.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:21.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.435 ms 00:09:21.214 00:09:21.214 --- 10.0.0.2 ping statistics --- 00:09:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.214 rtt min/avg/max/mdev = 0.435/0.435/0.435/0.000 ms 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:21.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:21.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:09:21.214 00:09:21.214 --- 10.0.0.1 ping statistics --- 00:09:21.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:21.214 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:21.214 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:21.215 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1264571 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1264571 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1264571 ']' 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:21.475 19:05:27 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:21.475 [2024-07-12 19:05:27.405055] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
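What the nvmf_tcp_init/nvmfappstart sequence above amounts to: the second E810 port is isolated in a network namespace so the SPDK target (10.0.0.2 inside cvl_0_0_ns_spdk) and the kernel initiator (10.0.0.1 on cvl_0_1) exchange real NVMe/TCP traffic, and the target application is then started inside that namespace. A minimal standalone sketch of the same bring-up, assuming root, an SPDK build tree in the current directory, and that cvl_0_0/cvl_0_1 exist and are otherwise unused:

  # carve the target-side port into its own namespace (mirrors nvmf/common.sh@244-268 above)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                   # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0     # target side, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT          # let NVMe/TCP traffic back in on the initiator port
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # reachability check, as logged above
  modprobe nvme-tcp                                                      # kernel initiator driver, loaded right before the target app

  # start the target in the namespace and wait for its JSON-RPC socket (roughly what nvmfappstart/waitforlisten do)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The two ping reports above (0.435 ms and 0.334 ms round-trips) are the output of exactly those reachability checks.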
00:09:21.475 [2024-07-12 19:05:27.405119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:21.475 EAL: No free 2048 kB hugepages reported on node 1 00:09:21.475 [2024-07-12 19:05:27.475450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:21.475 [2024-07-12 19:05:27.550893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:21.475 [2024-07-12 19:05:27.550929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:21.475 [2024-07-12 19:05:27.550936] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:21.475 [2024-07-12 19:05:27.550943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:21.475 [2024-07-12 19:05:27.550949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:21.475 [2024-07-12 19:05:27.551088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.475 [2024-07-12 19:05:27.551208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:21.475 [2024-07-12 19:05:27.551524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:21.475 [2024-07-12 19:05:27.551526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.418 [2024-07-12 19:05:28.234787] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:22.418 19:05:28 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:22.418 [2024-07-12 19:05:28.294193] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:09:22.418 19:05:28 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:09:26.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.322 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.529 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:40.828 rmmod nvme_tcp 00:09:40.828 rmmod nvme_fabrics 00:09:40.828 rmmod nvme_keyring 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1264571 ']' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1264571 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1264571 ']' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1264571 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1264571 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1264571' 00:09:40.828 killing process with pid 1264571 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1264571 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1264571 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:40.828 19:05:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.376 19:05:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:43.376 00:09:43.376 real 0m28.968s 00:09:43.376 user 1m19.305s 00:09:43.376 sys 0m6.563s 00:09:43.376 19:05:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:43.376 19:05:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:43.376 ************************************ 00:09:43.376 END TEST nvmf_connect_disconnect 00:09:43.376 ************************************ 00:09:43.376 19:05:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:43.376 19:05:49 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:43.376 19:05:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:43.376 19:05:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:43.376 19:05:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:43.376 ************************************ 00:09:43.376 START TEST nvmf_multitarget 00:09:43.376 ************************************ 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:43.376 * Looking for test storage... 
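Before the five connect/disconnect iterations summarised above (real 0m28.968s), connect_disconnect.sh provisioned its target over JSON-RPC: a TCP transport, a RAM-backed Malloc bdev, subsystem cnode1 with that namespace, and a listener on 10.0.0.2:4420. A condensed sketch of that sequence plus one loop iteration, assuming rpc.py is talking to the nvmf_tgt started above and that $NVME_HOSTNQN/$NVME_HOSTID are the values exported by nvmf/common.sh:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0        # transport options exactly as passed by the test
  ./scripts/rpc.py bdev_malloc_create 64 512                           # 64 MiB bdev, 512 B blocks; prints its name (Malloc0)
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # one of the five iterations: attach the controller, then detach it again
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
       --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1                        # prints "... disconnected 1 controller(s)" as seen above

nvmftestfini then reverses all of this: the rmmod nvme_tcp/nvme_fabrics, killprocess and remove_spdk_ns lines in the log are it unloading the initiator modules, stopping the target pid and flushing the test addresses.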
00:09:43.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:43.376 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
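The nvmf/common.sh lines above also show where the host identity used throughout these tests comes from: each run generates a fresh host NQN with nvme gen-hostnqn and reuses its UUID as the host ID, and assertions such as the referrals checks earlier interrogate the discovery service on port 8009 and filter the JSON with jq. A hedged sketch of that query pattern, assuming a target already listening at 10.0.0.2:8009:

  HOSTNQN=$(nvme gen-hostnqn)              # nqn.2014-08.org.nvmexpress:uuid:... as in the log
  HOSTID=${HOSTNQN##*uuid:}                # the test derives --hostid from the same UUID
  nvme discover -t tcp -a 10.0.0.2 -s 8009 --hostnqn="$HOSTNQN" --hostid="$HOSTID" -o json \
      | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'   # same filter as get_discovery_entries

In the referrals run earlier, the same pattern with subtype "discovery subsystem referral" confirmed that the referral pointed at nqn.2014-08.org.nvmexpress.discovery before it was removed with nvmf_discovery_remove_referral.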
00:09:43.377 19:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:43.377 19:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:43.377 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:43.377 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:43.377 19:05:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:43.377 19:05:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:09:49.967 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:09:49.967 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:49.967 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:09:49.968 Found net devices under 0000:4b:00.0: cvl_0_0 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:09:49.968 Found net devices under 0000:4b:00.1: cvl_0_1 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:49.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:49.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.455 ms 00:09:49.968 00:09:49.968 --- 10.0.0.2 ping statistics --- 00:09:49.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.968 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:49.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:49.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:09:49.968 00:09:49.968 --- 10.0.0.1 ping statistics --- 00:09:49.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:49.968 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:49.968 19:05:55 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1272696 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1272696 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1272696 ']' 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:49.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:49.968 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:49.968 [2024-07-12 19:05:56.076047] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:09:49.968 [2024-07-12 19:05:56.076094] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:50.229 EAL: No free 2048 kB hugepages reported on node 1 00:09:50.229 [2024-07-12 19:05:56.144628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:50.229 [2024-07-12 19:05:56.209560] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:50.229 [2024-07-12 19:05:56.209596] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:50.229 [2024-07-12 19:05:56.209604] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:50.229 [2024-07-12 19:05:56.209610] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:50.229 [2024-07-12 19:05:56.209619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:50.229 [2024-07-12 19:05:56.209773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:50.229 [2024-07-12 19:05:56.209890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:50.229 [2024-07-12 19:05:56.210052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.229 [2024-07-12 19:05:56.210052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:50.799 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:50.799 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:50.799 19:05:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:50.800 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:50.800 19:05:56 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:51.059 19:05:56 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:51.059 19:05:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:51.059 19:05:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.059 19:05:56 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:51.059 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:51.059 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:51.059 "nvmf_tgt_1" 00:09:51.059 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:51.320 "nvmf_tgt_2" 00:09:51.320 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.320 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:51.320 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:51.320 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:51.320 true 00:09:51.320 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:51.581 true 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:51.581 rmmod nvme_tcp 00:09:51.581 rmmod nvme_fabrics 00:09:51.581 rmmod nvme_keyring 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1272696 ']' 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1272696 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1272696 ']' 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1272696 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:51.581 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1272696 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1272696' 00:09:51.842 killing process with pid 1272696 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1272696 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1272696 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:51.842 19:05:57 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.388 19:05:59 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:54.388 00:09:54.388 real 0m10.850s 00:09:54.388 user 0m9.145s 00:09:54.388 sys 0m5.599s 00:09:54.388 19:05:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:54.388 19:05:59 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:54.388 ************************************ 00:09:54.388 END TEST nvmf_multitarget 00:09:54.388 ************************************ 00:09:54.388 19:05:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:54.388 19:05:59 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:54.388 19:05:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:54.388 19:05:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:54.388 19:05:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:54.388 ************************************ 00:09:54.388 START TEST nvmf_rpc 00:09:54.388 ************************************ 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:54.388 * Looking for test storage... 
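Before the nvmf_rpc output continues: the nvmf_multitarget test that just finished drives test/nvmf/target/multitarget_rpc.py to add and remove extra target instances, checking the count with jq at each step. A condensed sketch of that sequence, using only the calls and flags traced above:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  # Exactly one (default) target exists after startup
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]
  # Add two named targets (-s 32 as in the run above); the count becomes 3
  $RPC nvmf_create_target -n nvmf_tgt_1 -s 32
  $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
  [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]
  # Delete them again; only the default target remains
  $RPC nvmf_delete_target -n nvmf_tgt_1
  $RPC nvmf_delete_target -n nvmf_tgt_2
  [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]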
00:09:54.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:54.388 19:06:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:00.973 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:00.974 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:00.974 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:00.974 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:00.974 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:00.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:00.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.545 ms 00:10:00.974 00:10:00.974 --- 10.0.0.2 ping statistics --- 00:10:00.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.974 rtt min/avg/max/mdev = 0.545/0.545/0.545/0.000 ms 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:00.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:00.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:10:00.974 00:10:00.974 --- 10.0.0.1 ping statistics --- 00:10:00.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:00.974 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:00.974 19:06:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1277165 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1277165 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1277165 ']' 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:00.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:00.974 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:00.975 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.975 [2024-07-12 19:06:07.075880] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:10:00.975 [2024-07-12 19:06:07.075943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.235 EAL: No free 2048 kB hugepages reported on node 1 00:10:01.235 [2024-07-12 19:06:07.148856] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.235 [2024-07-12 19:06:07.223938] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.235 [2024-07-12 19:06:07.223977] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
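nvmftestinit repeats the same bring-up for the nvmf_rpc test: one ice port (cvl_0_0) is moved into a namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator side, the NVMe/TCP port is opened, and both directions are checked with ping. A condensed sketch using the exact commands traced above:

  # Target side lives in a namespace; initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Accept NVMe/TCP traffic on the initiator-facing port, then verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1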
00:10:01.235 [2024-07-12 19:06:07.223984] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.235 [2024-07-12 19:06:07.223994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.235 [2024-07-12 19:06:07.224000] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.235 [2024-07-12 19:06:07.224164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.235 [2024-07-12 19:06:07.224261] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.235 [2024-07-12 19:06:07.224420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.235 [2024-07-12 19:06:07.224421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:01.805 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:01.805 "tick_rate": 2400000000, 00:10:01.805 "poll_groups": [ 00:10:01.805 { 00:10:01.805 "name": "nvmf_tgt_poll_group_000", 00:10:01.805 "admin_qpairs": 0, 00:10:01.805 "io_qpairs": 0, 00:10:01.805 "current_admin_qpairs": 0, 00:10:01.805 "current_io_qpairs": 0, 00:10:01.805 "pending_bdev_io": 0, 00:10:01.805 "completed_nvme_io": 0, 00:10:01.805 "transports": [] 00:10:01.805 }, 00:10:01.805 { 00:10:01.805 "name": "nvmf_tgt_poll_group_001", 00:10:01.805 "admin_qpairs": 0, 00:10:01.805 "io_qpairs": 0, 00:10:01.805 "current_admin_qpairs": 0, 00:10:01.805 "current_io_qpairs": 0, 00:10:01.805 "pending_bdev_io": 0, 00:10:01.805 "completed_nvme_io": 0, 00:10:01.805 "transports": [] 00:10:01.805 }, 00:10:01.805 { 00:10:01.805 "name": "nvmf_tgt_poll_group_002", 00:10:01.805 "admin_qpairs": 0, 00:10:01.805 "io_qpairs": 0, 00:10:01.805 "current_admin_qpairs": 0, 00:10:01.806 "current_io_qpairs": 0, 00:10:01.806 "pending_bdev_io": 0, 00:10:01.806 "completed_nvme_io": 0, 00:10:01.806 "transports": [] 00:10:01.806 }, 00:10:01.806 { 00:10:01.806 "name": "nvmf_tgt_poll_group_003", 00:10:01.806 "admin_qpairs": 0, 00:10:01.806 "io_qpairs": 0, 00:10:01.806 "current_admin_qpairs": 0, 00:10:01.806 "current_io_qpairs": 0, 00:10:01.806 "pending_bdev_io": 0, 00:10:01.806 "completed_nvme_io": 0, 00:10:01.806 "transports": [] 00:10:01.806 } 00:10:01.806 ] 00:10:01.806 }' 00:10:01.806 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:01.806 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:01.806 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:01.806 19:06:07 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:10:02.066 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:02.066 19:06:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.066 [2024-07-12 19:06:08.026139] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:02.066 "tick_rate": 2400000000, 00:10:02.066 "poll_groups": [ 00:10:02.066 { 00:10:02.066 "name": "nvmf_tgt_poll_group_000", 00:10:02.066 "admin_qpairs": 0, 00:10:02.066 "io_qpairs": 0, 00:10:02.066 "current_admin_qpairs": 0, 00:10:02.066 "current_io_qpairs": 0, 00:10:02.066 "pending_bdev_io": 0, 00:10:02.066 "completed_nvme_io": 0, 00:10:02.066 "transports": [ 00:10:02.066 { 00:10:02.066 "trtype": "TCP" 00:10:02.066 } 00:10:02.066 ] 00:10:02.066 }, 00:10:02.066 { 00:10:02.066 "name": "nvmf_tgt_poll_group_001", 00:10:02.066 "admin_qpairs": 0, 00:10:02.066 "io_qpairs": 0, 00:10:02.066 "current_admin_qpairs": 0, 00:10:02.066 "current_io_qpairs": 0, 00:10:02.066 "pending_bdev_io": 0, 00:10:02.066 "completed_nvme_io": 0, 00:10:02.066 "transports": [ 00:10:02.066 { 00:10:02.066 "trtype": "TCP" 00:10:02.066 } 00:10:02.066 ] 00:10:02.066 }, 00:10:02.066 { 00:10:02.066 "name": "nvmf_tgt_poll_group_002", 00:10:02.066 "admin_qpairs": 0, 00:10:02.066 "io_qpairs": 0, 00:10:02.066 "current_admin_qpairs": 0, 00:10:02.066 "current_io_qpairs": 0, 00:10:02.066 "pending_bdev_io": 0, 00:10:02.066 "completed_nvme_io": 0, 00:10:02.066 "transports": [ 00:10:02.066 { 00:10:02.066 "trtype": "TCP" 00:10:02.066 } 00:10:02.066 ] 00:10:02.066 }, 00:10:02.066 { 00:10:02.066 "name": "nvmf_tgt_poll_group_003", 00:10:02.066 "admin_qpairs": 0, 00:10:02.066 "io_qpairs": 0, 00:10:02.066 "current_admin_qpairs": 0, 00:10:02.066 "current_io_qpairs": 0, 00:10:02.066 "pending_bdev_io": 0, 00:10:02.066 "completed_nvme_io": 0, 00:10:02.066 "transports": [ 00:10:02.066 { 00:10:02.066 "trtype": "TCP" 00:10:02.066 } 00:10:02.066 ] 00:10:02.066 } 00:10:02.066 ] 00:10:02.066 }' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
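What rpc.sh is checking here: nvmf_get_stats first reports four poll groups with empty transports arrays, nvmf_create_transport -t tcp -o -u 8192 brings up the TCP transport, and a second nvmf_get_stats shows a TCP entry in every poll group; the jcount/jsum helpers are just jq plus wc/awk over that JSON. A rough equivalent, where rpc.py is shorthand for SPDK's scripts/rpc.py on the same socket that rpc_cmd uses in the trace:

  # Create the TCP transport with the options used above, then confirm each poll group reports it
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_get_stats | jq -r '.poll_groups[].transports[].trtype'   # one "TCP" per poll group
  # jsum-style aggregation: sum a per-poll-group counter across all groups
  rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1} END {print s}'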
00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.066 Malloc1 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.066 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 [2024-07-12 19:06:08.213867] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.2 -s 4420 00:10:02.326 [2024-07-12 19:06:08.240736] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:10:02.326 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:02.326 could not add new controller: failed to write to nvme-fabrics device 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:02.326 19:06:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:03.711 19:06:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:03.711 19:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:03.711 19:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:03.711 19:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:03.711 19:06:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.256 19:06:11 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:06.256 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:10:06.256 19:06:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:06.256 [2024-07-12 19:06:11.988281] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be' 00:10:06.256 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:06.256 could not add new controller: failed to write to nvme-fabrics device 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:06.256 19:06:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:07.648 19:06:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:07.648 19:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:07.648 19:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:07.648 19:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:07.648 19:06:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.560 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:09.560 19:06:15 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.560 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.821 [2024-07-12 19:06:15.703345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.821 19:06:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:11.207 19:06:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:11.207 19:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:11.207 19:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:11.207 19:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:11.207 19:06:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:13.120 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:13.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:13.380 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.381 [2024-07-12 19:06:19.431168] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:13.381 19:06:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:15.295 19:06:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:15.295 19:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:10:15.295 19:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:15.295 19:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:15.295 19:06:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:17.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 [2024-07-12 19:06:23.168711] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:17.272 19:06:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.657 19:06:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:18.657 19:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:18.657 19:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:18.657 19:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:18.657 19:06:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:21.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 [2024-07-12 19:06:26.910315] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:21.211 19:06:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:22.597 19:06:28 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:22.597 19:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:22.597 19:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.597 19:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:22.597 19:06:28 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.509 
19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:24.509 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.509 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.769 [2024-07-12 19:06:30.653380] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:24.769 19:06:30 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.769 19:06:30 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.152 19:06:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:10:26.152 19:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:26.152 19:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:26.152 19:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:26.152 19:06:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:28.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.694 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 [2024-07-12 19:06:34.408208] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 [2024-07-12 19:06:34.468328] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 [2024-07-12 19:06:34.536531] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
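[Editorial note] The loop traced at target/rpc.sh@99-107 above repeatedly builds and tears down the same subsystem without ever connecting a host. As a rough stand-alone illustration (not the test script itself), the equivalent sequence of plain rpc.py calls is sketched below; the $rpc shorthand and the explicit loop are added for readability and the five iterations are inferred from the "seq 1 5" in the trace, while the NQN, serial, namespace and listener parameters are exactly the ones visible above:

    # Sketch only: approximates the rpc_cmd sequence seen in the trace above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # rpc.py path used in this run
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1      # bdev created earlier in the test
        $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1         # nsid 1, as in the trace
        $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done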
00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.695 [2024-07-12 19:06:34.596715] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.695 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
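[Editorial note] For the connect/disconnect iterations earlier in this test, the waitforserial and waitforserial_disconnect helpers (autotest_common.sh@1198 and @1219 in the trace) simply poll lsblk until a namespace with the expected serial appears or disappears. A simplified sketch of that polling pattern is below; the "i++ <= 15" bound and the initial "sleep 2" come from the trace, the 1-second interval in the second helper is an assumption, and the full harness helpers are more elaborate than this:

    # Minimal sketch of the polling pattern, not the harness helpers themselves.
    waitforserial() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            sleep 2                                                   # 2 s between checks, as in the trace
            (( $(lsblk -l -o NAME,SERIAL | grep -c "$serial") >= 1 )) && return 0
        done
        return 1
    }
    waitforserial_disconnect() {
        local serial=$1 i=0
        while (( i++ <= 15 )); do
            lsblk -l -o NAME,SERIAL | grep -q -w "$serial" || return 0
            sleep 1                                                   # interval assumed; not shown in the trace
        done
        return 1
    }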
00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 [2024-07-12 19:06:34.656897] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:10:28.696 "tick_rate": 2400000000, 00:10:28.696 "poll_groups": [ 00:10:28.696 { 00:10:28.696 "name": "nvmf_tgt_poll_group_000", 00:10:28.696 "admin_qpairs": 0, 00:10:28.696 "io_qpairs": 224, 00:10:28.696 "current_admin_qpairs": 0, 00:10:28.696 "current_io_qpairs": 0, 00:10:28.696 "pending_bdev_io": 0, 00:10:28.696 "completed_nvme_io": 438, 00:10:28.696 "transports": [ 00:10:28.696 { 00:10:28.696 "trtype": "TCP" 00:10:28.696 } 00:10:28.696 ] 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "name": "nvmf_tgt_poll_group_001", 00:10:28.696 "admin_qpairs": 1, 00:10:28.696 "io_qpairs": 223, 00:10:28.696 "current_admin_qpairs": 0, 00:10:28.696 "current_io_qpairs": 0, 00:10:28.696 "pending_bdev_io": 0, 00:10:28.696 "completed_nvme_io": 264, 00:10:28.696 "transports": [ 00:10:28.696 { 00:10:28.696 "trtype": "TCP" 00:10:28.696 } 00:10:28.696 ] 00:10:28.696 }, 00:10:28.696 { 
00:10:28.696 "name": "nvmf_tgt_poll_group_002", 00:10:28.696 "admin_qpairs": 6, 00:10:28.696 "io_qpairs": 218, 00:10:28.696 "current_admin_qpairs": 0, 00:10:28.696 "current_io_qpairs": 0, 00:10:28.696 "pending_bdev_io": 0, 00:10:28.696 "completed_nvme_io": 312, 00:10:28.696 "transports": [ 00:10:28.696 { 00:10:28.696 "trtype": "TCP" 00:10:28.696 } 00:10:28.696 ] 00:10:28.696 }, 00:10:28.696 { 00:10:28.696 "name": "nvmf_tgt_poll_group_003", 00:10:28.696 "admin_qpairs": 0, 00:10:28.696 "io_qpairs": 224, 00:10:28.696 "current_admin_qpairs": 0, 00:10:28.696 "current_io_qpairs": 0, 00:10:28.696 "pending_bdev_io": 0, 00:10:28.696 "completed_nvme_io": 225, 00:10:28.696 "transports": [ 00:10:28.696 { 00:10:28.696 "trtype": "TCP" 00:10:28.696 } 00:10:28.696 ] 00:10:28.696 } 00:10:28.696 ] 00:10:28.696 }' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:28.696 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:28.957 rmmod nvme_tcp 00:10:28.957 rmmod nvme_fabrics 00:10:28.957 rmmod nvme_keyring 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1277165 ']' 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1277165 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1277165 ']' 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1277165 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1277165 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1277165' 00:10:28.957 killing process with pid 1277165 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1277165 00:10:28.957 19:06:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1277165 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:29.219 19:06:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.131 19:06:37 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:31.131 00:10:31.131 real 0m37.140s 00:10:31.131 user 1m53.340s 00:10:31.131 sys 0m6.902s 00:10:31.131 19:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:31.131 19:06:37 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:31.131 ************************************ 00:10:31.131 END TEST nvmf_rpc 00:10:31.131 ************************************ 00:10:31.131 19:06:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:31.131 19:06:37 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.131 19:06:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:31.131 19:06:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:31.131 19:06:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:31.131 ************************************ 00:10:31.131 START TEST nvmf_invalid 00:10:31.131 ************************************ 00:10:31.131 19:06:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:31.393 * Looking for test storage... 
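[Editorial note] The qpair checks at the end of nvmf_rpc above ("(( 7 > 0 ))" and "(( 889 > 0 ))") come from the jsum helper, which sums one numeric field across all poll groups in the nvmf_get_stats output. A stand-alone approximation of that pipeline follows; the stats.json filename is only for illustration, while the jq filters and the awk summation are the ones visible in the trace:

    # Sum admin and I/O qpairs across poll groups from an nvmf_get_stats dump (sketch).
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_stats > stats.json
    admin_qpairs=$(jq '.poll_groups[].admin_qpairs' stats.json | awk '{s+=$1}END{print s}')
    io_qpairs=$(jq '.poll_groups[].io_qpairs' stats.json | awk '{s+=$1}END{print s}')
    (( admin_qpairs > 0 )) && (( io_qpairs > 0 ))    # the test only asserts the totals are non-zero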
00:10:31.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:31.393 19:06:37 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.534 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:39.535 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:39.535 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:39.535 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:39.535 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
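[Editorial note] The nvmf_tcp_init steps traced just above move the target-side port (cvl_0_0) into a private network namespace and leave the initiator-side port (cvl_0_1) in the default namespace, so 10.0.0.1 and 10.0.0.2 sit on a real link between the two ports found under 0000:4b:00.0/1. A condensed sketch of that setup, using only commands visible in the trace (the interface and namespace names are specific to this run and will differ on other hosts):

    # Condensed from the nvmf_tcp_init trace above; adjust interface names per host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side stays in the default netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                               # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                 # target -> initiator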
00:10:39.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:10:39.535 00:10:39.535 --- 10.0.0.2 ping statistics --- 00:10:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.535 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:10:39.535 00:10:39.535 --- 10.0.0.1 ping statistics --- 00:10:39.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.535 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1287483 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1287483 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1287483 ']' 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.535 19:06:44 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.535 [2024-07-12 19:06:44.658177] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
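[Editorial note] nvmfappstart above launches the target inside the namespace and then blocks in waitforlisten until the RPC socket answers. A minimal approximation of that start-up sequence is sketched below; the polling loop, retry bound and use of rpc_get_methods as a readiness probe are assumptions (the harness helper is more elaborate), while the binary path, namespace, flags and the /var/tmp/spdk.sock socket are the ones shown in the trace:

    # Start nvmf_tgt in the test namespace and wait for its RPC socket (sketch).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    for _ in $(seq 1 100); do                                        # retry bound assumed
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods > /dev/null 2>&1 && break
        sleep 0.5
    done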
00:10:39.535 [2024-07-12 19:06:44.658256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.535 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.535 [2024-07-12 19:06:44.732998] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.535 [2024-07-12 19:06:44.808657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.535 [2024-07-12 19:06:44.808692] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.535 [2024-07-12 19:06:44.808700] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.535 [2024-07-12 19:06:44.808707] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.535 [2024-07-12 19:06:44.808713] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:39.535 [2024-07-12 19:06:44.808849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.535 [2024-07-12 19:06:44.808973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.535 [2024-07-12 19:06:44.809146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.535 [2024-07-12 19:06:44.809147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9155 00:10:39.536 [2024-07-12 19:06:45.619094] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:39.536 { 00:10:39.536 "nqn": "nqn.2016-06.io.spdk:cnode9155", 00:10:39.536 "tgt_name": "foobar", 00:10:39.536 "method": "nvmf_create_subsystem", 00:10:39.536 "req_id": 1 00:10:39.536 } 00:10:39.536 Got JSON-RPC error response 00:10:39.536 response: 00:10:39.536 { 00:10:39.536 "code": -32603, 00:10:39.536 "message": "Unable to find target foobar" 00:10:39.536 }' 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:39.536 { 00:10:39.536 "nqn": "nqn.2016-06.io.spdk:cnode9155", 00:10:39.536 "tgt_name": "foobar", 00:10:39.536 "method": "nvmf_create_subsystem", 00:10:39.536 "req_id": 1 00:10:39.536 } 00:10:39.536 Got JSON-RPC error response 00:10:39.536 response: 00:10:39.536 { 00:10:39.536 "code": -32603, 00:10:39.536 "message": "Unable to find target foobar" 00:10:39.536 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:39.536 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode3235 00:10:39.797 [2024-07-12 19:06:45.795727] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3235: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:39.797 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:39.797 { 00:10:39.797 "nqn": "nqn.2016-06.io.spdk:cnode3235", 00:10:39.797 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:39.797 "method": "nvmf_create_subsystem", 00:10:39.797 "req_id": 1 00:10:39.797 } 00:10:39.797 Got JSON-RPC error response 00:10:39.797 response: 00:10:39.797 { 00:10:39.797 "code": -32602, 00:10:39.797 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:39.797 }' 00:10:39.797 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:39.797 { 00:10:39.797 "nqn": "nqn.2016-06.io.spdk:cnode3235", 00:10:39.797 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:39.797 "method": "nvmf_create_subsystem", 00:10:39.797 "req_id": 1 00:10:39.797 } 00:10:39.797 Got JSON-RPC error response 00:10:39.797 response: 00:10:39.797 { 00:10:39.797 "code": -32602, 00:10:39.797 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:39.797 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:39.797 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:39.797 19:06:45 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32417 00:10:40.059 [2024-07-12 19:06:45.972315] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32417: invalid model number 'SPDK_Controller' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:40.059 { 00:10:40.059 "nqn": "nqn.2016-06.io.spdk:cnode32417", 00:10:40.059 "model_number": "SPDK_Controller\u001f", 00:10:40.059 "method": "nvmf_create_subsystem", 00:10:40.059 "req_id": 1 00:10:40.059 } 00:10:40.059 Got JSON-RPC error response 00:10:40.059 response: 00:10:40.059 { 00:10:40.059 "code": -32602, 00:10:40.059 "message": "Invalid MN SPDK_Controller\u001f" 00:10:40.059 }' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:40.059 { 00:10:40.059 "nqn": "nqn.2016-06.io.spdk:cnode32417", 00:10:40.059 "model_number": "SPDK_Controller\u001f", 00:10:40.059 "method": "nvmf_create_subsystem", 00:10:40.059 "req_id": 1 00:10:40.059 } 00:10:40.059 Got JSON-RPC error response 00:10:40.059 response: 00:10:40.059 { 00:10:40.059 "code": -32602, 00:10:40.059 "message": "Invalid MN SPDK_Controller\u001f" 00:10:40.059 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.059 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ q == \- ]] 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'q)DNifXiaOLS3$0x)z|d>' 00:10:40.060 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'q)DNifXiaOLS3$0x)z|d>' nqn.2016-06.io.spdk:cnode25637 00:10:40.322 [2024-07-12 19:06:46.309365] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25637: invalid serial number 'q)DNifXiaOLS3$0x)z|d>' 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:40.322 { 00:10:40.322 "nqn": "nqn.2016-06.io.spdk:cnode25637", 00:10:40.322 "serial_number": "q)DNifXiaOLS3$0x)z|d>", 00:10:40.322 "method": "nvmf_create_subsystem", 00:10:40.322 "req_id": 1 00:10:40.322 } 00:10:40.322 Got JSON-RPC error response 00:10:40.322 response: 00:10:40.322 { 
00:10:40.322 "code": -32602, 00:10:40.322 "message": "Invalid SN q)DNifXiaOLS3$0x)z|d>" 00:10:40.322 }' 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:40.322 { 00:10:40.322 "nqn": "nqn.2016-06.io.spdk:cnode25637", 00:10:40.322 "serial_number": "q)DNifXiaOLS3$0x)z|d>", 00:10:40.322 "method": "nvmf_create_subsystem", 00:10:40.322 "req_id": 1 00:10:40.322 } 00:10:40.322 Got JSON-RPC error response 00:10:40.322 response: 00:10:40.322 { 00:10:40.322 "code": -32602, 00:10:40.322 "message": "Invalid SN q)DNifXiaOLS3$0x)z|d>" 00:10:40.322 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:10:40.322 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 
00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 
00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.323 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:40.585 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ r == \- ]] 
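# The xtrace just above is gen_random_s from target/invalid.sh assembling a 41-character
# string: it keeps a chars array of ASCII codes 32-127 and, for every position, converts a
# randomly chosen code to hex with printf %x, turns it into a character with echo -e '\xNN',
# and appends it to $string; after the loop, the check at invalid.sh@28 looks at whether the
# first character is '-' before the finished string is echoed below. A condensed sketch of
# the same idea (the helper and variable names here are illustrative, not the script's own):
gen_random_string() {    # illustrative stand-in for gen_random_s
    local length=$1 str='' code i
    for ((i = 0; i < length; i++)); do
        code=$((RANDOM % 96 + 32))                   # same 32..127 code range as the chars array
        str+=$(printf "\\x$(printf '%x' "$code")")   # decimal code -> hex escape -> literal character
    done
    echo "$str"
}
# gen_random_string 41 yields a value like the model number echoed just below.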
00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'r1W3wQiS4(u4&8`w!WShP^~@g\xZr,B_}0pC8*$(A' 00:10:40.586 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'r1W3wQiS4(u4&8`w!WShP^~@g\xZr,B_}0pC8*$(A' nqn.2016-06.io.spdk:cnode26740 00:10:40.846 [2024-07-12 19:06:46.790914] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26740: invalid model number 'r1W3wQiS4(u4&8`w!WShP^~@g\xZr,B_}0pC8*$(A' 00:10:40.846 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:10:40.846 { 00:10:40.846 "nqn": "nqn.2016-06.io.spdk:cnode26740", 00:10:40.846 "model_number": "r1W3wQiS4(u4&8`w!WShP^~@g\\xZr,B_}0pC8*$(A", 00:10:40.846 "method": "nvmf_create_subsystem", 00:10:40.846 "req_id": 1 00:10:40.847 } 00:10:40.847 Got JSON-RPC error response 00:10:40.847 response: 00:10:40.847 { 00:10:40.847 "code": -32602, 00:10:40.847 "message": "Invalid MN r1W3wQiS4(u4&8`w!WShP^~@g\\xZr,B_}0pC8*$(A" 00:10:40.847 }' 00:10:40.847 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:10:40.847 { 00:10:40.847 "nqn": "nqn.2016-06.io.spdk:cnode26740", 00:10:40.847 "model_number": "r1W3wQiS4(u4&8`w!WShP^~@g\\xZr,B_}0pC8*$(A", 00:10:40.847 "method": "nvmf_create_subsystem", 00:10:40.847 "req_id": 1 00:10:40.847 } 00:10:40.847 Got JSON-RPC error response 00:10:40.847 response: 00:10:40.847 { 00:10:40.847 "code": -32602, 00:10:40.847 "message": "Invalid MN r1W3wQiS4(u4&8`w!WShP^~@g\\xZr,B_}0pC8*$(A" 00:10:40.847 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:40.847 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:10:40.847 [2024-07-12 19:06:46.959562] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:41.107 19:06:46 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:10:41.107 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:10:41.107 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:10:41.107 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:10:41.107 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:10:41.107 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:10:41.368 [2024-07-12 19:06:47.310137] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:10:41.368 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:10:41.368 { 00:10:41.368 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:41.368 "listen_address": { 00:10:41.368 "trtype": "tcp", 00:10:41.368 "traddr": "", 00:10:41.368 "trsvcid": "4421" 00:10:41.368 }, 00:10:41.368 "method": "nvmf_subsystem_remove_listener", 00:10:41.368 "req_id": 1 00:10:41.368 } 00:10:41.368 Got JSON-RPC error response 00:10:41.368 response: 00:10:41.368 { 00:10:41.368 "code": -32602, 00:10:41.368 "message": "Invalid parameters" 00:10:41.368 }' 00:10:41.368 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:10:41.368 { 00:10:41.368 "nqn": "nqn.2016-06.io.spdk:cnode", 00:10:41.368 "listen_address": { 00:10:41.368 "trtype": "tcp", 
00:10:41.368 "traddr": "", 00:10:41.368 "trsvcid": "4421" 00:10:41.368 }, 00:10:41.368 "method": "nvmf_subsystem_remove_listener", 00:10:41.368 "req_id": 1 00:10:41.368 } 00:10:41.368 Got JSON-RPC error response 00:10:41.368 response: 00:10:41.368 { 00:10:41.368 "code": -32602, 00:10:41.368 "message": "Invalid parameters" 00:10:41.368 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:10:41.368 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13564 -i 0 00:10:41.368 [2024-07-12 19:06:47.474613] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13564: invalid cntlid range [0-65519] 00:10:41.629 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:10:41.629 { 00:10:41.629 "nqn": "nqn.2016-06.io.spdk:cnode13564", 00:10:41.629 "min_cntlid": 0, 00:10:41.629 "method": "nvmf_create_subsystem", 00:10:41.629 "req_id": 1 00:10:41.629 } 00:10:41.629 Got JSON-RPC error response 00:10:41.629 response: 00:10:41.629 { 00:10:41.629 "code": -32602, 00:10:41.629 "message": "Invalid cntlid range [0-65519]" 00:10:41.629 }' 00:10:41.629 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:10:41.629 { 00:10:41.629 "nqn": "nqn.2016-06.io.spdk:cnode13564", 00:10:41.629 "min_cntlid": 0, 00:10:41.629 "method": "nvmf_create_subsystem", 00:10:41.629 "req_id": 1 00:10:41.629 } 00:10:41.629 Got JSON-RPC error response 00:10:41.629 response: 00:10:41.629 { 00:10:41.629 "code": -32602, 00:10:41.629 "message": "Invalid cntlid range [0-65519]" 00:10:41.629 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:41.629 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode28523 -i 65520 00:10:41.629 [2024-07-12 19:06:47.651181] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28523: invalid cntlid range [65520-65519] 00:10:41.629 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:10:41.629 { 00:10:41.629 "nqn": "nqn.2016-06.io.spdk:cnode28523", 00:10:41.629 "min_cntlid": 65520, 00:10:41.629 "method": "nvmf_create_subsystem", 00:10:41.629 "req_id": 1 00:10:41.629 } 00:10:41.629 Got JSON-RPC error response 00:10:41.629 response: 00:10:41.629 { 00:10:41.629 "code": -32602, 00:10:41.629 "message": "Invalid cntlid range [65520-65519]" 00:10:41.629 }' 00:10:41.629 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:10:41.629 { 00:10:41.629 "nqn": "nqn.2016-06.io.spdk:cnode28523", 00:10:41.629 "min_cntlid": 65520, 00:10:41.629 "method": "nvmf_create_subsystem", 00:10:41.629 "req_id": 1 00:10:41.629 } 00:10:41.629 Got JSON-RPC error response 00:10:41.629 response: 00:10:41.629 { 00:10:41.629 "code": -32602, 00:10:41.629 "message": "Invalid cntlid range [65520-65519]" 00:10:41.629 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:41.630 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15405 -I 0 00:10:41.891 [2024-07-12 19:06:47.827790] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15405: invalid cntlid range [1-0] 00:10:41.891 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:10:41.891 { 00:10:41.891 "nqn": 
"nqn.2016-06.io.spdk:cnode15405", 00:10:41.891 "max_cntlid": 0, 00:10:41.891 "method": "nvmf_create_subsystem", 00:10:41.891 "req_id": 1 00:10:41.891 } 00:10:41.891 Got JSON-RPC error response 00:10:41.891 response: 00:10:41.891 { 00:10:41.891 "code": -32602, 00:10:41.892 "message": "Invalid cntlid range [1-0]" 00:10:41.892 }' 00:10:41.892 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:10:41.892 { 00:10:41.892 "nqn": "nqn.2016-06.io.spdk:cnode15405", 00:10:41.892 "max_cntlid": 0, 00:10:41.892 "method": "nvmf_create_subsystem", 00:10:41.892 "req_id": 1 00:10:41.892 } 00:10:41.892 Got JSON-RPC error response 00:10:41.892 response: 00:10:41.892 { 00:10:41.892 "code": -32602, 00:10:41.892 "message": "Invalid cntlid range [1-0]" 00:10:41.892 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:41.892 19:06:47 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1848 -I 65520 00:10:41.892 [2024-07-12 19:06:47.984259] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1848: invalid cntlid range [1-65520] 00:10:41.892 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:10:41.892 { 00:10:41.892 "nqn": "nqn.2016-06.io.spdk:cnode1848", 00:10:41.892 "max_cntlid": 65520, 00:10:41.892 "method": "nvmf_create_subsystem", 00:10:41.892 "req_id": 1 00:10:41.892 } 00:10:41.892 Got JSON-RPC error response 00:10:41.892 response: 00:10:41.892 { 00:10:41.892 "code": -32602, 00:10:41.892 "message": "Invalid cntlid range [1-65520]" 00:10:41.892 }' 00:10:41.892 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:10:41.892 { 00:10:41.892 "nqn": "nqn.2016-06.io.spdk:cnode1848", 00:10:41.892 "max_cntlid": 65520, 00:10:41.892 "method": "nvmf_create_subsystem", 00:10:41.892 "req_id": 1 00:10:41.892 } 00:10:41.892 Got JSON-RPC error response 00:10:41.892 response: 00:10:41.892 { 00:10:41.892 "code": -32602, 00:10:41.892 "message": "Invalid cntlid range [1-65520]" 00:10:41.892 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:41.892 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23837 -i 6 -I 5 00:10:42.152 [2024-07-12 19:06:48.152774] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23837: invalid cntlid range [6-5] 00:10:42.152 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:10:42.153 { 00:10:42.153 "nqn": "nqn.2016-06.io.spdk:cnode23837", 00:10:42.153 "min_cntlid": 6, 00:10:42.153 "max_cntlid": 5, 00:10:42.153 "method": "nvmf_create_subsystem", 00:10:42.153 "req_id": 1 00:10:42.153 } 00:10:42.153 Got JSON-RPC error response 00:10:42.153 response: 00:10:42.153 { 00:10:42.153 "code": -32602, 00:10:42.153 "message": "Invalid cntlid range [6-5]" 00:10:42.153 }' 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:10:42.153 { 00:10:42.153 "nqn": "nqn.2016-06.io.spdk:cnode23837", 00:10:42.153 "min_cntlid": 6, 00:10:42.153 "max_cntlid": 5, 00:10:42.153 "method": "nvmf_create_subsystem", 00:10:42.153 "req_id": 1 00:10:42.153 } 00:10:42.153 Got JSON-RPC error response 00:10:42.153 response: 00:10:42.153 { 00:10:42.153 "code": -32602, 00:10:42.153 "message": "Invalid cntlid range [6-5]" 00:10:42.153 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:10:42.153 19:06:48 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:10:42.153 { 00:10:42.153 "name": "foobar", 00:10:42.153 "method": "nvmf_delete_target", 00:10:42.153 "req_id": 1 00:10:42.153 } 00:10:42.153 Got JSON-RPC error response 00:10:42.153 response: 00:10:42.153 { 00:10:42.153 "code": -32602, 00:10:42.153 "message": "The specified target doesn'\''t exist, cannot delete it." 00:10:42.153 }' 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:10:42.153 { 00:10:42.153 "name": "foobar", 00:10:42.153 "method": "nvmf_delete_target", 00:10:42.153 "req_id": 1 00:10:42.153 } 00:10:42.153 Got JSON-RPC error response 00:10:42.153 response: 00:10:42.153 { 00:10:42.153 "code": -32602, 00:10:42.153 "message": "The specified target doesn't exist, cannot delete it." 00:10:42.153 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.153 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.415 rmmod nvme_tcp 00:10:42.415 rmmod nvme_fabrics 00:10:42.415 rmmod nvme_keyring 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 1287483 ']' 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 1287483 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 1287483 ']' 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 1287483 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1287483 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1287483' 00:10:42.415 killing process with pid 1287483 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 1287483 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 1287483 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:42.415 19:06:48 
nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:42.415 19:06:48 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.965 19:06:50 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:44.965 00:10:44.965 real 0m13.373s 00:10:44.965 user 0m19.116s 00:10:44.965 sys 0m6.242s 00:10:44.965 19:06:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:44.965 19:06:50 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:44.965 ************************************ 00:10:44.965 END TEST nvmf_invalid 00:10:44.965 ************************************ 00:10:44.965 19:06:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:44.965 19:06:50 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:44.965 19:06:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:44.965 19:06:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.965 19:06:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:44.965 ************************************ 00:10:44.965 START TEST nvmf_abort 00:10:44.965 ************************************ 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:44.965 * Looking for test storage... 
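# The nvmf_invalid run that ended above repeats one pattern: call an SPDK JSON-RPC method
# through scripts/rpc.py with a deliberately bad argument (unknown target, control character
# in the serial or model number, out-of-range cntlid, empty listener address), capture the
# error response, and pattern-match its message. The rejected ranges also suggest that
# min_cntlid/max_cntlid are expected to stay within 1-65519 and satisfy min <= max. A minimal
# sketch of one such check, reusing only rpc.py calls that appear in the trace (the
# expect_rpc_error helper is illustrative, not part of invalid.sh):
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
expect_rpc_error() {    # illustrative helper: the command must fail and mention $1
    local pattern=$1 out
    shift
    out=$("$rpc" "$@" 2>&1) && { echo "expected failure, got success: $*"; return 1; }
    [[ $out == *"$pattern"* ]] || { echo "unexpected error for $*: $out"; return 1; }
}
expect_rpc_error 'Unable to find target' nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9155
expect_rpc_error 'Invalid cntlid range'  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13564 -i 0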
00:10:44.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.965 19:06:50 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:10:51.557 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.557 19:06:57 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:10:51.557 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:10:51.557 Found net devices under 0000:4b:00.0: cvl_0_0 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:10:51.557 Found net devices under 0000:4b:00.1: cvl_0_1 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:51.557 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:51.818 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:51.818 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.342 ms 00:10:51.818 00:10:51.818 --- 10.0.0.2 ping statistics --- 00:10:51.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.818 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:51.818 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:51.818 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.329 ms 00:10:51.818 00:10:51.818 --- 10.0.0.1 ping statistics --- 00:10:51.818 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:51.818 rtt min/avg/max/mdev = 0.329/0.329/0.329/0.000 ms 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1292507 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1292507 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1292507 ']' 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:51.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:51.818 19:06:57 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.079 [2024-07-12 19:06:57.953219] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:10:52.079 [2024-07-12 19:06:57.953286] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.079 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.079 [2024-07-12 19:06:58.040097] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:52.079 [2024-07-12 19:06:58.135673] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.079 [2024-07-12 19:06:58.135723] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:52.079 [2024-07-12 19:06:58.135731] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.079 [2024-07-12 19:06:58.135738] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.079 [2024-07-12 19:06:58.135745] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.079 [2024-07-12 19:06:58.135875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:52.079 [2024-07-12 19:06:58.136044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:52.079 [2024-07-12 19:06:58.136045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.650 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.650 [2024-07-12 19:06:58.778021] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:52.910 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.911 Malloc0 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.911 Delay0 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.911 [2024-07-12 19:06:58.861110] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:52.911 19:06:58 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:52.911 EAL: No free 2048 kB hugepages reported on node 1 00:10:52.911 [2024-07-12 19:06:58.979324] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:54.896 Initializing NVMe Controllers 00:10:54.896 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:54.896 controller IO queue size 128 less than required 00:10:54.896 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:54.896 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:54.896 Initialization complete. Launching workers. 
00:10:54.896 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32875 00:10:54.896 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32936, failed to submit 62 00:10:54.896 success 32879, unsuccess 57, failed 0 00:10:54.896 19:07:01 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:54.896 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:54.896 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:55.156 rmmod nvme_tcp 00:10:55.156 rmmod nvme_fabrics 00:10:55.156 rmmod nvme_keyring 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1292507 ']' 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1292507 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1292507 ']' 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1292507 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1292507 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1292507' 00:10:55.156 killing process with pid 1292507 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1292507 00:10:55.156 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1292507 00:10:55.417 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:55.417 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:55.417 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:55.417 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:55.417 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:55.418 19:07:01 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:55.418 19:07:01 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:55.418 19:07:01 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.330 19:07:03 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:57.330 00:10:57.330 real 0m12.697s 00:10:57.330 user 0m13.246s 00:10:57.330 sys 0m6.147s 00:10:57.330 19:07:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:57.330 19:07:03 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:57.330 ************************************ 00:10:57.330 END TEST nvmf_abort 00:10:57.330 ************************************ 00:10:57.330 19:07:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:57.330 19:07:03 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:57.330 19:07:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:57.330 19:07:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:57.330 19:07:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:57.330 ************************************ 00:10:57.330 START TEST nvmf_ns_hotplug_stress 00:10:57.331 ************************************ 00:10:57.331 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:57.592 * Looking for test storage... 00:10:57.592 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:57.592 19:07:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:57.592 19:07:03 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:57.592 19:07:03 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:05.742 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:05.742 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.742 19:07:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:05.742 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:05.742 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.742 19:07:10 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.742 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.743 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.489 ms 00:11:05.743 00:11:05.743 --- 10.0.0.2 ping statistics --- 00:11:05.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.743 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.326 ms 00:11:05.743 00:11:05.743 --- 10.0.0.1 ping statistics --- 00:11:05.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.743 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1297349 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1297349 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1297349 ']' 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.743 19:07:10 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.743 [2024-07-12 19:07:10.867997] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:11:05.743 [2024-07-12 19:07:10.868047] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.743 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.743 [2024-07-12 19:07:10.951627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:05.743 [2024-07-12 19:07:11.028258] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.743 [2024-07-12 19:07:11.028310] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.743 [2024-07-12 19:07:11.028318] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.743 [2024-07-12 19:07:11.028325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.743 [2024-07-12 19:07:11.028331] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.743 [2024-07-12 19:07:11.028478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:05.743 [2024-07-12 19:07:11.028633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:05.743 [2024-07-12 19:07:11.028633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:05.743 [2024-07-12 19:07:11.821401] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:05.743 19:07:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:06.003 19:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.263 [2024-07-12 19:07:12.162899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.263 19:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:06.263 19:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:11:06.523 Malloc0 00:11:06.523 19:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:06.783 Delay0 00:11:06.784 19:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.784 19:07:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:11:07.044 NULL1 00:11:07.044 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:07.044 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:11:07.044 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1297727 00:11:07.044 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:07.044 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.304 EAL: No free 2048 kB hugepages reported on node 1 00:11:07.304 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.564 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:11:07.564 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:11:07.564 [2024-07-12 19:07:13.647704] bdev.c:5033:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:11:07.564 true 00:11:07.565 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:07.565 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.825 19:07:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.085 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:11:08.085 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:11:08.085 true 00:11:08.085 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:08.085 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.346 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.606 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:11:08.606 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:11:08.606 true 00:11:08.607 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:08.607 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:08.868 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:08.868 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:11:08.868 19:07:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:11:09.130 true 00:11:09.130 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:09.130 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.391 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.391 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:11:09.391 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:11:09.651 true 00:11:09.651 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:09.651 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:09.912 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:09.912 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:11:09.912 19:07:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:11:10.173 true 00:11:10.173 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:10.173 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:10.434 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
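The hot-plug churn in this part of the trace runs against a target that was brought up a few entries earlier (ns_hotplug_stress.sh@27-36). A condensed bash sketch of that bring-up is shown below; the rpc.py path, subsystem NQN and bdev parameters are copied from the logged commands, while the standalone-script form and the comments are illustrative rather than part of the test itself.

#!/usr/bin/env bash
# Condensed sketch of the target configuration traced above (ns_hotplug_stress.sh@27-36).
# Assumes an nvmf_tgt is already running and listening on the default /var/tmp/spdk.sock.
set -e
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

$rpc nvmf_create_transport -t tcp -o -u 8192                      # TCP transport, options exactly as logged
$rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10  # any host allowed, max 10 namespaces
$rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

$rpc bdev_malloc_create 32 512 -b Malloc0                         # 32 MB ram-backed bdev, 512-byte blocks
$rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000                   # wrap it with ~1 s of added latency
$rpc bdev_null_create NULL1 1000 512                              # 1000 MB null bdev, resized during the loop

$rpc nvmf_subsystem_add_ns "$nqn" Delay0                          # namespace 1
$rpc nvmf_subsystem_add_ns "$nqn" NULL1                           # namespace 2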
00:11:10.434 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:11:10.434 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:11:10.695 true 00:11:10.695 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:10.695 19:07:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.638 Read completed with error (sct=0, sc=11) 00:11:11.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.638 19:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:11.638 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:11.638 19:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:11:11.638 19:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:11:11.899 true 00:11:11.899 19:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:11.899 19:07:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.841 19:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:12.841 19:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:11:12.841 19:07:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:11:13.102 true 00:11:13.102 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:13.102 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.102 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.362 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:11:13.362 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:11:13.623 true 00:11:13.623 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:13.623 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.623 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:13.883 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:11:13.883 19:07:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:11:14.143 true 00:11:14.143 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:14.143 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.143 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.404 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:11:14.404 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:11:14.664 true 00:11:14.664 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:14.664 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.664 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:14.925 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:11:14.925 19:07:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:11:14.925 true 00:11:15.185 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:15.185 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.185 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.446 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:11:15.446 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:11:15.446 true 00:11:15.446 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:15.446 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:15.706 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:15.967 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 
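The repeating @44-@50 xtrace entries in this part of the log are the single-namespace hotplug loop of ns_hotplug_stress.sh: while the process being monitored (pid 1297727 here) is still alive, the test hot-removes namespace 1, re-adds it from the Delay0 bdev, and grows the NULL1 null bdev by one unit per pass (null_size 1007, 1008, ... in this run). A minimal bash sketch of that loop, reconstructed from the trace rather than quoted from the script; rpc_py, pid and the starting size are assumptions:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # RPC client used throughout this log
    null_size=1000                                                           # counter reaches 1007..1050 in this run
    while kill -0 "$pid"; do                                          # @44: loop while the monitored pid is alive
        $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1        # @45: hot-remove namespace 1
        $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0      # @46: hot-add it back from the Delay0 bdev
        null_size=$((null_size + 1))                                         # @49
        $rpc_py bdev_null_resize NULL1 $null_size                            # @50: resize NULL1 while I/O is in flight
    done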
00:11:15.967 19:07:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:11:15.967 true 00:11:15.967 19:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:15.967 19:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:16.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.906 19:07:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:16.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:16.906 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:17.167 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:11:17.167 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:11:17.427 true 00:11:17.427 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:17.427 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.427 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:17.687 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:11:17.687 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:11:17.947 true 00:11:17.947 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:17.947 19:07:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:17.947 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.206 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:11:18.206 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:11:18.465 true 00:11:18.465 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:18.465 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.465 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:18.725 19:07:24 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:11:18.725 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:11:18.985 true 00:11:18.985 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:18.985 19:07:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:18.985 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.246 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:11:19.246 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:11:19.246 true 00:11:19.505 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:19.505 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:19.505 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:19.764 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:11:19.764 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:11:19.765 true 00:11:20.024 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:20.024 19:07:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.024 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.284 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:11:20.284 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:11:20.284 true 00:11:20.284 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:20.284 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:20.544 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:20.804 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:11:20.804 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:11:20.804 true 00:11:20.804 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:20.804 19:07:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.064 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.323 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:11:21.323 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:11:21.323 true 00:11:21.324 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:21.324 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:21.583 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:21.843 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:11:21.843 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:11:21.843 true 00:11:21.843 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:21.843 19:07:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.102 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.362 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:11:22.362 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:11:22.362 true 00:11:22.362 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:22.362 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:22.622 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:22.882 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:11:22.882 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:11:22.882 true 00:11:22.882 
19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:22.882 19:07:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:23.143 19:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:23.143 19:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:11:23.143 19:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:11:23.403 true 00:11:23.403 19:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:23.403 19:07:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.424 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.424 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:11:24.424 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:11:24.684 true 00:11:24.684 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:24.684 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.684 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:24.945 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:11:24.945 19:07:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:11:25.205 true 00:11:25.205 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:25.205 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.205 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.465 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:11:25.465 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:11:25.724 true 00:11:25.725 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:25.725 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:25.725 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:25.984 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:11:25.984 19:07:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:11:25.984 true 00:11:25.984 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:25.984 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:26.243 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:26.502 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:11:26.502 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:11:26.502 true 00:11:26.502 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:26.502 19:07:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.441 19:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:27.702 19:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:11:27.702 19:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:11:27.702 true 00:11:27.702 19:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:27.702 19:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:27.963 19:07:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:28.222 19:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:28.222 19:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:28.222 true 00:11:28.222 19:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:28.222 19:07:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:11:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.605 19:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.605 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:29.605 19:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:29.605 19:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:29.866 true 00:11:29.866 19:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:29.866 19:07:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:30.808 19:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:30.808 19:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:30.808 19:07:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:31.069 true 00:11:31.069 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:31.069 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.069 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.330 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:31.330 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:31.591 true 00:11:31.591 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:31.591 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:31.591 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:31.852 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:31.852 19:07:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 
00:11:32.113 true 00:11:32.113 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:32.113 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.113 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.375 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:32.375 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:32.375 true 00:11:32.636 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:32.636 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:32.636 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:32.898 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:32.898 19:07:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:32.898 true 00:11:33.159 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:33.159 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.159 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.422 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:33.422 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:33.422 true 00:11:33.422 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:33.422 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:33.682 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:33.943 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:33.943 19:07:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:33.943 true 00:11:33.943 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:33.943 19:07:40 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.203 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.464 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:34.464 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:34.464 true 00:11:34.464 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:34.464 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:34.725 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:34.986 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:34.986 19:07:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:34.986 true 00:11:34.986 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:34.986 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.247 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.508 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:35.508 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:35.508 true 00:11:35.508 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:35.508 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:35.768 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:35.768 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:35.768 19:07:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:36.028 true 00:11:36.028 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:36.028 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:11:36.288 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.288 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:11:36.288 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:11:36.547 true 00:11:36.547 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:36.547 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:36.806 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:36.806 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:11:36.806 19:07:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:11:37.064 true 00:11:37.064 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727 00:11:37.064 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:37.323 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:37.323 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:11:37.323 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:11:37.582 Initializing NVMe Controllers 00:11:37.582 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:37.582 Controller IO queue size 128, less than required. 00:11:37.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:37.582 Controller IO queue size 128, less than required. 00:11:37.582 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:11:37.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:37.582 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:11:37.582 Initialization complete. Launching workers. 
00:11:37.582 ========================================================
00:11:37.582 Latency(us)
00:11:37.582 Device Information : IOPS MiB/s Average min max
00:11:37.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 425.89 0.21 67309.80 2221.09 1140524.79
00:11:37.582 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 6777.85 3.31 18822.01 1742.19 521726.52
00:11:37.582 ========================================================
00:11:37.582 Total : 7203.75 3.52 21688.67 1742.19 1140524.79
00:11:37.582
00:11:37.582 true
00:11:37.582 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1297727
00:11:37.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1297727) - No such process
00:11:37.582 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1297727
00:11:37.582 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:37.842 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:37.842 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:37.842 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:37.842 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:37.842 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:37.842 19:07:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:38.102 null0
00:11:38.102 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:38.102 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:38.102 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:38.362 null1
00:11:38.362 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:38.362 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:38.362 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:38.362 null2
00:11:38.362 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:38.362 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:38.362 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:11:38.623 null3
00:11:38.623 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:38.623 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:38.623 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:38.884 null4 00:11:38.884 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:38.884 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:38.884 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:38.884 null5 00:11:38.884 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:38.884 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:38.884 19:07:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:39.144 null6 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:39.144 null7 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.144 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
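The @14-@18 entries interleaved through the rest of this log come from the add_remove workers spawned for each null bdev: every worker repeatedly attaches its bdev as a fixed namespace ID and detaches it again, ten times. A rough reconstruction from the xtrace (not the verbatim script; rpc_py as in the sketch above):

    add_remove() {                       # @14: called with pairs such as "1 null0" ... "8 null7"
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do   # @16: ten add/remove cycles per worker
            $rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17
            $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18
        done
    }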
00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
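Eight of these workers run concurrently, one per null bdev created at @60, and the test then blocks on them all (the "wait 1304292 1304294 ..." entry below). A sketch of that phase under the same assumptions as the snippets above:

    nthreads=8                                    # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do          # @59
        $rpc_py bdev_null_create null$i 100 4096  # @60: create null0..null7 with the size/block-size arguments seen in the trace
    done
    for ((i = 0; i < nthreads; i++)); do          # @62
        add_remove $((i + 1)) null$i &            # @63: namespace i+1 backed by null<i>, run in the background
        pids+=($!)                                # @64
    done
    wait "${pids[@]}"                             # @66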
00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1304292 1304294 1304297 1304299 1304303 1304306 1304310 1304313 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:39.405 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:39.666 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.927 19:07:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:39.927 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:40.187 
19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:40.187 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:40.448 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:40.708 19:07:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:40.708 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:40.968 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:41.228 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.488 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:41.749 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:42.009 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.270 19:07:48 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:42.270 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.530 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.531 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
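The wall of rpc.py calls above is the namespace hotplug stress loop itself: ns_hotplug_stress.sh lines 16-18 repeatedly attach one of the null bdevs (null0-null7) to nqn.2016-06.io.spdk:cnode1 as namespace IDs 1-8 and detach it again, with the loop counter capped at 10. Below is a minimal bash sketch of one such worker, reconstructed only from the xtrace in this log; the real script's control flow may differ, and several workers clearly run in parallel, which is why the add, remove and counter lines interleave.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
n=4                                   # namespace ID handled by this worker (1-8 appear in the log)
i=0
while (( i < 10 )); do                # ns_hotplug_stress.sh@16: bounded stress counter
    $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"   # @17: attach bdev null(n-1) as nsid n
    $rpc nvmf_subsystem_remove_ns "$nqn" "$n"                    # @18: detach nsid n again
    (( ++i ))
done

Running several of these workers concurrently against a live subsystem is what exercises the target's namespace attach/detach paths under contention.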
00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:42.792 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:42.792 rmmod nvme_tcp 00:11:42.792 rmmod nvme_fabrics 00:11:43.053 rmmod nvme_keyring 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1297349 ']' 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1297349 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1297349 ']' 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1297349 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:43.053 19:07:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1297349 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1297349' 00:11:43.053 killing process with pid 1297349 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1297349 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1297349 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:43.053 19:07:49 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.602 19:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:45.602 00:11:45.602 real 0m47.762s 00:11:45.602 user 3m12.274s 00:11:45.602 sys 0m15.633s 00:11:45.602 19:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:45.602 19:07:51 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:11:45.602 ************************************ 00:11:45.602 END TEST nvmf_ns_hotplug_stress 00:11:45.602 ************************************ 00:11:45.602 19:07:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:45.602 19:07:51 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:45.602 19:07:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:45.602 19:07:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:45.602 19:07:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:45.602 ************************************ 00:11:45.602 START TEST nvmf_connect_stress 00:11:45.602 ************************************ 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:45.602 * Looking for test storage... 
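The hotplug test tears itself down through nvmftestfini (nvmf/common.sh): the host-side NVMe/TCP modules are unloaded (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring above), the nvmf_tgt process is killed (pid 1297349 in this run), the SPDK network namespace is removed and the initiator address is flushed. A hedged, simplified equivalent of that sequence follows; the namespace-removal command is an assumption about what _remove_spdk_ns does rather than something visible in this log.

sync
modprobe -v -r nvme-tcp                 # drags out nvme_tcp/nvme_fabrics/nvme_keyring, as seen above
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"      # nvmfpid was 1297349 for this nvmf_tgt
ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1                # drop the initiator's 10.0.0.1/24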
00:11:45.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:45.602 19:07:51 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.193 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:11:52.194 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:11:52.194 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:11:52.194 Found net devices under 0000:4b:00.0: cvl_0_0 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.194 19:07:58 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:11:52.194 Found net devices under 0000:4b:00.1: cvl_0_1 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.194 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.549 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:52.549 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:11:52.549 00:11:52.549 --- 10.0.0.2 ping statistics --- 00:11:52.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.549 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.549 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.549 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:11:52.549 00:11:52.549 --- 10.0.0.1 ping statistics --- 00:11:52.549 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.549 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1309390 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1309390 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1309390 ']' 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:52.549 19:07:58 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:52.811 [2024-07-12 19:07:58.658555] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:11:52.811 [2024-07-12 19:07:58.658620] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.811 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.811 [2024-07-12 19:07:58.745689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:52.811 [2024-07-12 19:07:58.838983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.811 [2024-07-12 19:07:58.839040] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.811 [2024-07-12 19:07:58.839048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.811 [2024-07-12 19:07:58.839055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.811 [2024-07-12 19:07:58.839062] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:52.811 [2024-07-12 19:07:58.839206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:52.811 [2024-07-12 19:07:58.839384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:52.811 [2024-07-12 19:07:58.839384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 [2024-07-12 19:07:59.468327] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.382 [2024-07-12 19:07:59.501265] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.382 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.643 NULL1 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1309683 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:53.643 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 EAL: No free 2048 kB hugepages reported on node 1 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.644 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:53.904 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:53.904 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:53.904 19:07:59 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:53.904 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:53.904 19:07:59 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.165 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.165 19:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:54.165 19:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.165 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.165 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.736 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.736 19:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1309683 00:11:54.736 19:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.736 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.736 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:54.996 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:54.996 19:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:54.996 19:08:00 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:54.996 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:54.996 19:08:00 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.257 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.257 19:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:55.257 19:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.257 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.257 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.517 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.517 19:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:55.517 19:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.517 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.517 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:55.777 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:55.777 19:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:55.777 19:08:01 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:55.777 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:55.777 19:08:01 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.348 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.348 19:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:56.348 19:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.348 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.348 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.608 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.608 19:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:56.608 19:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.608 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.608 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:56.867 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:56.867 19:08:02 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:56.867 19:08:02 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:56.867 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:56.867 19:08:02 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.127 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.127 19:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:57.127 19:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.128 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.128 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.698 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.698 19:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:57.698 19:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.698 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.698 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:57.959 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:57.959 19:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:57.959 19:08:03 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:57.959 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:57.959 19:08:03 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.220 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.220 19:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:58.220 19:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.220 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.220 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.482 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.482 19:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:58.482 19:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.482 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.482 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:58.743 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:58.743 19:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:58.743 19:08:04 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:58.743 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:58.743 19:08:04 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.314 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.314 19:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:59.314 19:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:59.314 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.314 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.575 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.575 19:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:59.575 19:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.575 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.575 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:59.837 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.837 19:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:11:59.837 19:08:05 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:59.837 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.837 19:08:05 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.098 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.098 19:08:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:00.098 19:08:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.098 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.098 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.358 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.358 19:08:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:00.358 19:08:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.358 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.358 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:00.930 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:00.930 19:08:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:00.930 19:08:06 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:00.930 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:00.930 19:08:06 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.191 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.191 19:08:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:01.191 19:08:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.191 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.191 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.452 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.452 19:08:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:01.452 19:08:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.452 19:08:07 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.452 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.713 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.713 19:08:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:01.713 19:08:07 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.713 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.713 19:08:07 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:01.974 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:01.974 19:08:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:01.974 19:08:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:01.974 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:01.974 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.547 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.547 19:08:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:02.547 19:08:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.547 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.547 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:02.808 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:02.808 19:08:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:02.808 19:08:08 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:02.808 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:02.808 19:08:08 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.068 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.068 19:08:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:03.068 19:08:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.068 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.068 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.329 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.329 19:08:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:03.329 19:08:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.329 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:03.329 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.593 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:03.593 19:08:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:03.593 19:08:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:03.593 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 
-- # xtrace_disable 00:12:03.593 19:08:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:03.854 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1309683 00:12:04.114 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1309683) - No such process 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1309683 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:04.114 rmmod nvme_tcp 00:12:04.114 rmmod nvme_fabrics 00:12:04.114 rmmod nvme_keyring 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1309390 ']' 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1309390 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1309390 ']' 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1309390 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1309390 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1309390' 00:12:04.114 killing process with pid 1309390 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1309390 00:12:04.114 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1309390 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:04.376 19:08:10 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.291 19:08:12 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:06.291 00:12:06.291 real 0m21.075s 00:12:06.291 user 0m43.105s 00:12:06.291 sys 0m8.565s 00:12:06.291 19:08:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:06.291 19:08:12 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.291 ************************************ 00:12:06.291 END TEST nvmf_connect_stress 00:12:06.291 ************************************ 00:12:06.291 19:08:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:06.291 19:08:12 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:06.291 19:08:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:06.291 19:08:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:06.291 19:08:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:06.553 ************************************ 00:12:06.553 START TEST nvmf_fused_ordering 00:12:06.553 ************************************ 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:06.553 * Looking for test storage... 
00:12:06.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:12:06.553 19:08:12 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:13.141 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:13.141 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:13.141 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:13.141 19:08:18 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:13.141 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:13.141 19:08:18 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:13.141 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:13.141 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:12:13.141 00:12:13.141 --- 10.0.0.2 ping statistics --- 00:12:13.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.141 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:13.141 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:13.141 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:12:13.141 00:12:13.141 --- 10.0.0.1 ping statistics --- 00:12:13.141 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:13.141 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:13.141 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1315761 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1315761 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1315761 ']' 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:13.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:13.142 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.142 [2024-07-12 19:08:19.184918] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:12:13.142 [2024-07-12 19:08:19.184984] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:13.142 EAL: No free 2048 kB hugepages reported on node 1 00:12:13.402 [2024-07-12 19:08:19.272724] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:13.402 [2024-07-12 19:08:19.365794] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:13.402 [2024-07-12 19:08:19.365851] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:13.402 [2024-07-12 19:08:19.365859] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:13.402 [2024-07-12 19:08:19.365865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:13.402 [2024-07-12 19:08:19.365871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:13.402 [2024-07-12 19:08:19.365898] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 [2024-07-12 19:08:19.969155] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 [2024-07-12 19:08:19.985311] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.971 19:08:19 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 NULL1 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.971 19:08:19 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 19:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.971 19:08:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:13.971 19:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:13.971 19:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:13.971 19:08:20 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:13.971 19:08:20 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:13.971 [2024-07-12 19:08:20.039619] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:12:13.971 [2024-07-12 19:08:20.039667] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1315895 ] 00:12:13.971 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.539 Attached to nqn.2016-06.io.spdk:cnode1 00:12:14.539 Namespace ID: 1 size: 1GB 00:12:14.539 fused_ordering(0) 00:12:14.539 fused_ordering(1) 00:12:14.539 fused_ordering(2) 00:12:14.539 fused_ordering(3) 00:12:14.539 fused_ordering(4) 00:12:14.539 fused_ordering(5) 00:12:14.539 fused_ordering(6) 00:12:14.539 fused_ordering(7) 00:12:14.539 fused_ordering(8) 00:12:14.539 fused_ordering(9) 00:12:14.539 fused_ordering(10) 00:12:14.539 fused_ordering(11) 00:12:14.539 fused_ordering(12) 00:12:14.539 fused_ordering(13) 00:12:14.539 fused_ordering(14) 00:12:14.539 fused_ordering(15) 00:12:14.539 fused_ordering(16) 00:12:14.539 fused_ordering(17) 00:12:14.539 fused_ordering(18) 00:12:14.539 fused_ordering(19) 00:12:14.539 fused_ordering(20) 00:12:14.539 fused_ordering(21) 00:12:14.540 fused_ordering(22) 00:12:14.540 fused_ordering(23) 00:12:14.540 fused_ordering(24) 00:12:14.540 fused_ordering(25) 00:12:14.540 fused_ordering(26) 00:12:14.540 fused_ordering(27) 00:12:14.540 fused_ordering(28) 00:12:14.540 fused_ordering(29) 00:12:14.540 fused_ordering(30) 00:12:14.540 fused_ordering(31) 00:12:14.540 fused_ordering(32) 00:12:14.540 fused_ordering(33) 00:12:14.540 fused_ordering(34) 00:12:14.540 fused_ordering(35) 00:12:14.540 fused_ordering(36) 00:12:14.540 fused_ordering(37) 00:12:14.540 fused_ordering(38) 00:12:14.540 fused_ordering(39) 00:12:14.540 fused_ordering(40) 00:12:14.540 fused_ordering(41) 00:12:14.540 fused_ordering(42) 00:12:14.540 fused_ordering(43) 00:12:14.540 
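The rpc_cmd calls traced above configure the target that the fused_ordering tool then exercises. As a rough stand-alone equivalent, the same configuration could be driven with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (rpc_cmd in the harness is a thin wrapper around the same RPCs); the sketch below only mirrors the arguments visible in the trace, and the relative rpc.py path is an assumption.

#!/usr/bin/env bash
# Hedged recap of the RPC sequence traced above; assumes nvmf_tgt is already
# running and serving the default RPC socket (/var/tmp/spdk.sock).
RPC=./scripts/rpc.py   # path relative to an SPDK checkout (assumption)

$RPC nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, options as in the trace
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
     -a -s SPDK00000000000001 -m 10                           # allow any host, set serial, max 10 namespaces
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
     -t tcp -a 10.0.0.2 -s 4420                               # listener on the namespaced address
$RPC bdev_null_create NULL1 1000 512                          # 1000 MiB null bdev, 512-byte blocks
$RPC bdev_wait_for_examine
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1   # expose NULL1 as namespace 1

The fused_ordering binary then connects with the same trtype/traddr/trsvcid/subnqn string shown in the trace; the fused_ordering(N) lines that follow are its progress output.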
fused_ordering(44) ... fused_ordering(1012) [counters 44 through 1012 printed in sequence between 00:12:14.540 and 00:12:16.890; repetitive lines condensed]
00:12:16.890 fused_ordering(1013) 00:12:16.890 fused_ordering(1014) 00:12:16.890 fused_ordering(1015) 00:12:16.890 fused_ordering(1016) 00:12:16.890 fused_ordering(1017) 00:12:16.890 fused_ordering(1018) 00:12:16.890 fused_ordering(1019) 00:12:16.890 fused_ordering(1020) 00:12:16.890 fused_ordering(1021) 00:12:16.890 fused_ordering(1022) 00:12:16.890 fused_ordering(1023) 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.890 rmmod nvme_tcp 00:12:16.890 rmmod nvme_fabrics 00:12:16.890 rmmod nvme_keyring 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1315761 ']' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1315761 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1315761 ']' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1315761 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1315761 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1315761' 00:12:16.890 killing process with pid 1315761 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1315761 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1315761 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.890 19:08:22 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.433 19:08:25 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.433 00:12:19.433 real 0m12.636s 00:12:19.433 user 0m6.969s 00:12:19.433 sys 0m6.660s 00:12:19.433 19:08:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:19.433 19:08:25 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:19.433 ************************************ 00:12:19.433 END TEST nvmf_fused_ordering 00:12:19.433 ************************************ 00:12:19.433 19:08:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:19.433 19:08:25 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:19.433 19:08:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:19.433 19:08:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:19.433 19:08:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:19.433 ************************************ 00:12:19.434 START TEST nvmf_delete_subsystem 00:12:19.434 ************************************ 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:12:19.434 * Looking for test storage... 00:12:19.434 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.434 19:08:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.434 19:08:25 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.434 19:08:25 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.076 19:08:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:26.076 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:26.076 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.076 19:08:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.076 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:26.077 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:26.077 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.077 19:08:32 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.077 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:26.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.497 ms 00:12:26.374 00:12:26.374 --- 10.0.0.2 ping statistics --- 00:12:26.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.374 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:12:26.374 00:12:26.374 --- 10.0.0.1 ping statistics --- 00:12:26.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.374 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1320654 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1320654 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1320654 ']' 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.374 19:08:32 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.374 [2024-07-12 19:08:32.444629] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:12:26.374 [2024-07-12 19:08:32.444693] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.374 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.636 [2024-07-12 19:08:32.517627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:12:26.636 [2024-07-12 19:08:32.592472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
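By this point nvmf_tcp_init has turned the two ports into a point-to-point test link: cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-checked before nvmf_tgt is started inside the namespace. A condensed replay of those steps using the commands from the trace (run as root; binary path shortened, TGT_NS is local shorthand for this sketch):

  TGT_NS=cvl_0_0_ns_spdk                 # namespace name as it appears in the log
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$TGT_NS"
  ip link set cvl_0_0 netns "$TGT_NS"    # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side, root namespace
  ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$TGT_NS" ip link set cvl_0_0 up
  ip netns exec "$TGT_NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                               # root namespace -> target namespace
  ip netns exec "$TGT_NS" ping -c 1 10.0.0.1       # target namespace -> root namespace
  # The target runs inside the namespace, so traffic to 10.0.0.2:4420 has to
  # cross the physical link between the two ports rather than loopback:
  ip netns exec "$TGT_NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &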
00:12:26.636 [2024-07-12 19:08:32.592510] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.636 [2024-07-12 19:08:32.592518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.636 [2024-07-12 19:08:32.592524] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.636 [2024-07-12 19:08:32.592530] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.636 [2024-07-12 19:08:32.592671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.636 [2024-07-12 19:08:32.592672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.206 [2024-07-12 19:08:33.263859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.206 [2024-07-12 19:08:33.288047] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:27.206 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.207 NULL1 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.207 Delay0 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1320815 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:12:27.207 19:08:33 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:27.468 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.468 [2024-07-12 19:08:33.384692] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
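The rpc_cmd calls traced above build the target configuration for the first half of the test: a TCP transport, subsystem cnode1 listening on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (roughly one second of injected latency per I/O), and that delay bdev exposed as namespace 1. spdk_nvme_perf is then started against it, and two seconds into its five-second run the subsystem is deleted out from under it, which is what produces the wall of "completed with error (sct=0, sc=8)" lines that follows. A sketch of that sequence, with rpc standing in for scripts/rpc.py against the target's /var/tmp/spdk.sock and paths shortened:

  rpc() { ./scripts/rpc.py "$@"; }       # helper for this sketch only

  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc bdev_null_create NULL1 1000 512                 # null bdev, 512-byte blocks
  rpc bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000     # ~1 s of added latency per I/O
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # 5-second randrw load, 70% reads, queue depth 128, on cores 2-3 (-c 0xC):
  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  sleep 2
  rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # removed while I/O is still queued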
00:12:29.384 19:08:35 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.384 19:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:29.384 19:08:35 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.384 starting I/O failed: -6 00:12:29.384 Write completed with error (sct=0, sc=8) 00:12:29.384 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 [2024-07-12 19:08:35.468904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1307980 is same with the state(5) to be set 00:12:29.385 Read completed 
with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, 
sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Write completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 Read completed with error (sct=0, sc=8) 00:12:29.385 starting I/O failed: -6 00:12:29.385 Write completed with error (sct=0, sc=8) 
00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Write completed with error (sct=0, sc=8) 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 Read completed with error (sct=0, sc=8) 00:12:29.386 starting I/O failed: -6 00:12:29.386 starting I/O failed: -6 00:12:29.386 starting I/O failed: -6 00:12:29.386 starting I/O failed: -6 00:12:29.386 starting I/O failed: -6 00:12:29.386 starting I/O failed: -6 00:12:29.386 starting I/O failed: -6 00:12:30.328 [2024-07-12 19:08:36.441473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308ac0 is same with the state(5) to be set 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Write completed with error (sct=0, sc=8) 00:12:30.589 Write completed with error (sct=0, sc=8) 00:12:30.589 Write completed with error (sct=0, sc=8) 00:12:30.589 Write completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Write completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.589 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed 
with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 [2024-07-12 19:08:36.472856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1307cd0 is same with the state(5) to be set 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 [2024-07-12 19:08:36.472960] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1307630 is same with the state(5) to be set 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, 
sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 [2024-07-12 19:08:36.474064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f28f000d020 is same with the state(5) to be set 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Read completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 Read 
completed with error (sct=0, sc=8) 00:12:30.590 Write completed with error (sct=0, sc=8) 00:12:30.590 [2024-07-12 19:08:36.474906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f28f000d800 is same with the state(5) to be set 00:12:30.590 Initializing NVMe Controllers 00:12:30.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:30.590 Controller IO queue size 128, less than required. 00:12:30.590 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:30.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:30.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:30.590 Initialization complete. Launching workers. 00:12:30.590 ======================================================== 00:12:30.590 Latency(us) 00:12:30.590 Device Information : IOPS MiB/s Average min max 00:12:30.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.30 0.09 879459.03 247.76 1006337.46 00:12:30.590 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 176.80 0.09 964270.54 403.84 2002067.72 00:12:30.590 ======================================================== 00:12:30.590 Total : 354.10 0.17 921805.14 247.76 2002067.72 00:12:30.590 00:12:30.590 [2024-07-12 19:08:36.475464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1308ac0 (9): Bad file descriptor 00:12:30.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:12:30.590 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:30.590 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:12:30.590 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1320815 00:12:30.590 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:12:30.851 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:12:30.851 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1320815 00:12:30.851 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1320815) - No such process 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1320815 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1320815 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1320815 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.112 19:08:36 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.112 19:08:36 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.112 [2024-07-12 19:08:37.004729] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1321490 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:12:31.112 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:31.112 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.112 [2024-07-12 19:08:37.075593] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
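The trace just above closes out the first half: kill -0 eventually reports "No such process" for the perf pid, and the NOT wrapper asserts that wait on the dead perf process no longer succeeds, which is the passing outcome there. The second half, set up here, re-creates the subsystem and namespace, starts a shorter three-second perf run, and polls until perf exits on its own (each I/O still takes about a second through Delay0, hence the ~1,002,000 us averages reported further down). The polling loop, paraphrased from the delete_subsystem.sh trace rather than copied verbatim:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      (( delay++ > 20 )) && exit 1            # bounded wait (~10 s) so a hang fails the test quickly
      sleep 0.5
  done
  wait "$perf_pid"                            # collect perf's exit status once it is gone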
00:12:31.684 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:31.684 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:31.684 19:08:37 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:31.945 19:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:31.945 19:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:31.945 19:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:32.517 19:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:32.517 19:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:32.517 19:08:38 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.088 19:08:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:33.088 19:08:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:33.088 19:08:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.660 19:08:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:33.660 19:08:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:33.660 19:08:39 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:33.921 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:34.182 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:34.182 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:12:34.182 Initializing NVMe Controllers 00:12:34.182 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:34.182 Controller IO queue size 128, less than required. 00:12:34.182 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:34.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:34.182 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:34.182 Initialization complete. Launching workers. 
00:12:34.182 ======================================================== 00:12:34.182 Latency(us) 00:12:34.182 Device Information : IOPS MiB/s Average min max 00:12:34.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002129.85 1000268.58 1006748.50 00:12:34.182 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002888.71 1000290.06 1009188.69 00:12:34.182 ======================================================== 00:12:34.182 Total : 256.00 0.12 1002509.28 1000268.58 1009188.69 00:12:34.182 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1321490 00:12:34.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1321490) - No such process 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1321490 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.443 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.443 rmmod nvme_tcp 00:12:34.703 rmmod nvme_fabrics 00:12:34.703 rmmod nvme_keyring 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1320654 ']' 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1320654 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1320654 ']' 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1320654 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1320654 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1320654' 00:12:34.703 killing process with pid 1320654 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1320654 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1320654 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.703 19:08:40 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.251 19:08:42 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.251 00:12:37.251 real 0m17.754s 00:12:37.251 user 0m30.421s 00:12:37.251 sys 0m6.187s 00:12:37.251 19:08:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.251 19:08:42 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:12:37.251 ************************************ 00:12:37.251 END TEST nvmf_delete_subsystem 00:12:37.251 ************************************ 00:12:37.251 19:08:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:37.251 19:08:42 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:37.251 19:08:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.251 19:08:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.251 19:08:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.251 ************************************ 00:12:37.251 START TEST nvmf_ns_masking 00:12:37.251 ************************************ 00:12:37.251 19:08:42 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:37.251 * Looking for test storage... 
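The END TEST banner above is preceded by the standard teardown: nvmftestfini unloads the NVMe/TCP initiator modules, kills the nvmf_tgt process started earlier, removes the cvl_0_0_ns_spdk namespace, and flushes the initiator address. Roughly, assuming the pid and interface names from this particular run:

  sync
  modprobe -v -r nvme-tcp            # also pulls out nvme_fabrics / nvme_keyring, per the rmmod lines
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid" && wait "$nvmfpid" # nvmfpid=1320654 in this run
  ip netns delete cvl_0_0_ns_spdk    # assumed equivalent of the (untraced) remove_spdk_ns helper
  ip -4 addr flush cvl_0_1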
00:12:37.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.251 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=6574ecca-6c3b-4abb-baf2-599507976c5e 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=73144f09-0476-490d-af64-c8d7f7cd626d 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=e0f68be7-9d4d-47bd-8957-fccba153b7bf 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.252 19:08:43 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:12:43.844 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:12:43.844 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.844 
19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:12:43.844 Found net devices under 0000:4b:00.0: cvl_0_0 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:43.844 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:12:43.845 Found net devices under 0000:4b:00.1: cvl_0_1 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:43.845 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:44.106 19:08:49 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:44.106 19:08:50 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:44.106 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:44.106 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:44.106 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:44.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:44.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.543 ms 00:12:44.367 00:12:44.367 --- 10.0.0.2 ping statistics --- 00:12:44.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.367 rtt min/avg/max/mdev = 0.543/0.543/0.543/0.000 ms 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:44.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:44.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:12:44.367 00:12:44.367 --- 10.0.0.1 ping statistics --- 00:12:44.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:44.367 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1326489 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1326489 00:12:44.367 19:08:50 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:12:44.368 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1326489 ']' 00:12:44.368 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:44.368 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:44.368 19:08:50 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:44.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:44.368 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:44.368 19:08:50 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:44.368 [2024-07-12 19:08:50.401814] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:12:44.368 [2024-07-12 19:08:50.401884] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.368 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.368 [2024-07-12 19:08:50.473858] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:44.628 [2024-07-12 19:08:50.547196] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.628 [2024-07-12 19:08:50.547232] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.628 [2024-07-12 19:08:50.547239] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:44.628 [2024-07-12 19:08:50.547246] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:44.628 [2024-07-12 19:08:50.547251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.628 [2024-07-12 19:08:50.547273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.199 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:45.459 [2024-07-12 19:08:51.362402] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:45.460 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:12:45.460 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:12:45.460 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:45.460 Malloc1 00:12:45.460 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:45.720 Malloc2 00:12:45.720 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
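Up to this point the trace is target-side bring-up, driven through scripts/rpc.py against the nvmf_tgt that was started inside the cvl_0_0_ns_spdk network namespace: a TCP transport, two 64 MiB malloc bdevs, and subsystem nqn.2016-06.io.spdk:cnode1 with -a (allow any host) and serial SPDKISFASTANDAWESOME. Condensed into a standalone sketch for readability (paths, names and flags are copied from the trace; the -o/-u transport options are reproduced verbatim without expanding their meaning, and error handling is omitted):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # transport options exactly as ns_masking.sh passes them (-t tcp -o -u 8192)
    $rpc nvmf_create_transport -t tcp -o -u 8192
    # two 64 MiB bdevs with 512-byte blocks, later attached as namespaces 1 and 2
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc bdev_malloc_create 64 512 -b Malloc2
    # subsystem any host may connect to (-a); masking happens per namespace, not per subsystem
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME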
00:12:45.980 19:08:51 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:12:45.980 19:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.240 [2024-07-12 19:08:52.163209] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e0f68be7-9d4d-47bd-8957-fccba153b7bf -a 10.0.0.2 -s 4420 -i 4 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:46.240 19:08:52 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.784 [ 0]:0x1 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1adc5a10799b4dfe91b0a5c649861b6c 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1adc5a10799b4dfe91b0a5c649861b6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
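The host side then connects with nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I <host UUID>, and every ns_is_visible check that follows uses the same pattern: list the namespaces the controller exposes, then read the NGUID of the NSID under test and require it to be non-zero, since a masked namespace either drops out of nvme list-ns or reports an all-zero NGUID. A sketch of that check as it appears in the ns_masking.sh@43-45 trace (the controller name nvme0 is taken from the nvme list-subsys output above; the suite's real helper may differ in detail):

    ns_is_visible() {            # $1 is the NSID, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1"
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        # function succeeds only if the NGUID is not the all-zero placeholder
        [[ $nguid != "00000000000000000000000000000000" ]]
    }
    ns_is_visible 0x1   # exit status 0 only while NSID 1 is exposed to this host NQN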
00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:48.784 [ 0]:0x1 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1adc5a10799b4dfe91b0a5c649861b6c 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1adc5a10799b4dfe91b0a5c649861b6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:48.784 [ 1]:0x2 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:48.784 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:48.785 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:48.785 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:48.785 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.785 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.045 19:08:54 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:49.045 19:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:49.045 19:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e0f68be7-9d4d-47bd-8957-fccba153b7bf -a 10.0.0.2 -s 4420 -i 4 00:12:49.306 19:08:55 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:49.306 19:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:49.306 19:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.306 19:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:49.306 19:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:49.306 19:08:55 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:51.219 19:08:57 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:51.219 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.480 [ 0]:0x2 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.480 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:51.742 [ 0]:0x1 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1adc5a10799b4dfe91b0a5c649861b6c 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1adc5a10799b4dfe91b0a5c649861b6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:51.742 [ 1]:0x2 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:51.742 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:52.003 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:52.003 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:52.003 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:52.004 19:08:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:52.004 [ 0]:0x2 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.004 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:52.264 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:52.264 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I e0f68be7-9d4d-47bd-8957-fccba153b7bf -a 10.0.0.2 -s 4420 -i 4 00:12:52.540 19:08:58 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:52.540 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:52.540 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.540 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:52.540 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:52.540 19:08:58 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
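The visibility flips above are driven entirely from the target: NSID 1 was removed and re-added with --no-auto-visible, so it stays hidden until nvmf_ns_add_host grants it to a specific host NQN, and nvmf_ns_remove_host hides it again; NSID 2 was added without that flag and remains visible throughout. The control-plane sequence, copied from the trace into a standalone sketch:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # attach Malloc1 as NSID 1, hidden from every host until explicitly allowed
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant NSID 1 to host1 only ([ 0]:0x1 reappears for that host) ...
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # ... and revoke it again (the NGUID read back on the host goes to all zeroes)
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1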
00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.461 [ 0]:0x1 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1adc5a10799b4dfe91b0a5c649861b6c 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1adc5a10799b4dfe91b0a5c649861b6c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.461 [ 1]:0x2 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.461 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.722 [ 0]:0x2 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.722 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:54.984 19:09:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:54.984 [2024-07-12 19:09:01.008447] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:54.984 request: 00:12:54.984 { 00:12:54.984 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:54.984 "nsid": 2, 00:12:54.984 "host": "nqn.2016-06.io.spdk:host1", 00:12:54.984 "method": "nvmf_ns_remove_host", 00:12:54.984 "req_id": 1 00:12:54.984 } 00:12:54.984 Got JSON-RPC error response 00:12:54.984 response: 00:12:54.984 { 00:12:54.984 "code": -32602, 00:12:54.984 "message": "Invalid parameters" 00:12:54.984 } 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:54.984 [ 0]:0x2 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:54.984 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=83dec1cf8265412a9218952bb2e28495 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
83dec1cf8265412a9218952bb2e28495 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:55.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1328741 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1328741 /var/tmp/host.sock 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1328741 ']' 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:55.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:55.245 19:09:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:55.245 [2024-07-12 19:09:01.268380] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:12:55.245 [2024-07-12 19:09:01.268427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1328741 ] 00:12:55.245 EAL: No free 2048 kB hugepages reported on node 1 00:12:55.245 [2024-07-12 19:09:01.344506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.506 [2024-07-12 19:09:01.409794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:56.078 19:09:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:56.078 19:09:02 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:56.078 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.078 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:56.338 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 6574ecca-6c3b-4abb-baf2-599507976c5e 00:12:56.338 19:09:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:56.338 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 6574ECCA6C3B4ABBBAF2599507976C5E -i 00:12:56.338 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 73144f09-0476-490d-af64-c8d7f7cd626d 00:12:56.338 19:09:02 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:56.338 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 73144F090476490DAF64C8D7F7CD626D -i 00:12:56.598 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:56.859 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:56.859 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:56.859 19:09:02 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:57.119 nvme0n1 00:12:57.119 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:57.119 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:57.694 nvme1n2 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:57.694 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:58.018 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 6574ecca-6c3b-4abb-baf2-599507976c5e == \6\5\7\4\e\c\c\a\-\6\c\3\b\-\4\a\b\b\-\b\a\f\2\-\5\9\9\5\0\7\9\7\6\c\5\e ]] 00:12:58.018 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:58.018 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:58.018 19:09:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 73144f09-0476-490d-af64-c8d7f7cd626d == \7\3\1\4\4\f\0\9\-\0\4\7\6\-\4\9\0\d\-\a\f\6\4\-\c\8\d\7\f\7\c\d\6\2\6\d ]] 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1328741 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1328741 ']' 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1328741 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.018 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1328741 00:12:58.279 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:58.279 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:58.279 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1328741' 00:12:58.279 killing process with pid 1328741 00:12:58.279 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1328741 00:12:58.279 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1328741 00:12:58.279 19:09:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:58.540 19:09:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:58.540 19:09:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:58.540 19:09:04 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:58.540 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:58.541 rmmod nvme_tcp 00:12:58.541 rmmod nvme_fabrics 00:12:58.541 rmmod nvme_keyring 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1326489 ']' 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1326489 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1326489 ']' 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1326489 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1326489 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1326489' 00:12:58.541 killing process with pid 1326489 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1326489 00:12:58.541 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1326489 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.802 19:09:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.350 19:09:06 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:01.350 00:13:01.350 real 0m23.892s 00:13:01.350 user 0m23.670s 00:13:01.350 sys 0m7.292s 00:13:01.350 19:09:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:01.350 19:09:06 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:01.350 ************************************ 00:13:01.350 END TEST nvmf_ns_masking 00:13:01.350 ************************************ 00:13:01.350 19:09:06 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:13:01.350 19:09:06 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:13:01.350 19:09:06 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:01.350 19:09:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:01.350 19:09:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:01.350 19:09:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:01.350 ************************************ 00:13:01.350 START TEST nvmf_nvme_cli 00:13:01.350 ************************************ 00:13:01.350 19:09:06 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:01.350 * Looking for test storage... 00:13:01.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.350 19:09:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:13:01.351 19:09:07 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:13:07.941 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:13:07.941 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:13:07.941 Found net devices under 0000:4b:00.0: cvl_0_0 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:13:07.941 Found net devices under 0000:4b:00.1: cvl_0_1 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:07.941 19:09:13 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:07.941 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:08.203 19:09:14 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:08.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:08.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:13:08.203 00:13:08.203 --- 10.0.0.2 ping statistics --- 00:13:08.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.203 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:08.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:08.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:13:08.203 00:13:08.203 --- 10.0.0.1 ping statistics --- 00:13:08.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:08.203 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1334256 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1334256 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1334256 ']' 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:08.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:08.203 19:09:14 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:08.203 [2024-07-12 19:09:14.250416] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
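The nvmftestinit phase traced above reduces to a small amount of iproute2 plumbing: the first E810 port is moved into a private network namespace and given the target address, the second port stays in the root namespace as the initiator, and a ping in each direction verifies the 10.0.0.0/24 link before the target application is started. A minimal sketch reconstructed from the commands in the trace (the interface names cvl_0_0/cvl_0_1 and the namespace name are the ones reported by this run and will differ on other hosts):

    # target port goes into its own namespace, initiator port stays in the root namespace
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # 10.0.0.1 = initiator side, 10.0.0.2 = target side
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # bring the links up and open TCP/4420 on the initiator interface
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # sanity checks, matching the two ping transcripts above
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1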
00:13:08.203 [2024-07-12 19:09:14.250467] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:08.203 EAL: No free 2048 kB hugepages reported on node 1 00:13:08.203 [2024-07-12 19:09:14.316363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:08.463 [2024-07-12 19:09:14.382683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:08.463 [2024-07-12 19:09:14.382721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:08.463 [2024-07-12 19:09:14.382729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:08.463 [2024-07-12 19:09:14.382735] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:08.463 [2024-07-12 19:09:14.382741] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:08.463 [2024-07-12 19:09:14.382878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:08.463 [2024-07-12 19:09:14.382993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:08.463 [2024-07-12 19:09:14.383166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.463 [2024-07-12 19:09:14.383166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.033 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 [2024-07-12 19:09:15.063719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 Malloc0 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 Malloc1 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.034 [2024-07-12 19:09:15.153556] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.034 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -a 10.0.0.2 -s 4420 00:13:09.295 00:13:09.295 Discovery Log Number of Records 2, Generation counter 2 00:13:09.295 =====Discovery Log Entry 0====== 00:13:09.295 trtype: tcp 00:13:09.295 adrfam: ipv4 00:13:09.295 subtype: current discovery subsystem 00:13:09.295 treq: not required 00:13:09.295 portid: 0 00:13:09.295 trsvcid: 4420 00:13:09.295 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:09.295 traddr: 10.0.0.2 00:13:09.295 eflags: explicit discovery connections, duplicate discovery information 00:13:09.295 sectype: none 00:13:09.295 =====Discovery Log Entry 1====== 00:13:09.295 trtype: tcp 00:13:09.295 adrfam: ipv4 00:13:09.295 subtype: nvme subsystem 00:13:09.295 treq: not required 00:13:09.295 portid: 0 00:13:09.295 trsvcid: 4420 00:13:09.295 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:09.295 traddr: 10.0.0.2 00:13:09.295 eflags: none 00:13:09.295 sectype: none 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- 
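Everything the target needs for this test is configured through the rpc_cmd wrapper seen above. Written out as plain rpc.py calls against the nvmf_tgt launched inside the target namespace, the sequence is roughly the following (a sketch assembled from the trace; SPDK_DIR and NS are shorthand used here for readability, not variables the harness itself exports):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path used by this job
    NS="ip netns exec cvl_0_0_ns_spdk"                           # run inside the target namespace

    # start the target (the harness backgrounds it and waits on /var/tmp/spdk.sock)
    $NS $SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # TCP transport and two 64 MiB malloc bdevs with 512-byte blocks
    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

    # one subsystem carrying both namespaces, listening on the target address
    $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420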
target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:09.295 19:09:15 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.210 19:09:16 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:11.210 19:09:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:13:11.210 19:09:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.210 19:09:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:13:11.210 19:09:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:13:11.210 19:09:16 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:13.123 19:09:18 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:13:13.123 /dev/nvme0n1 ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:13.123 19:09:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.123 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- 
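On the initiator side the flow above is ordinary nvme-cli usage: discover the two log entries printed earlier, connect to the subsystem, wait until both namespaces appear as block devices with the expected serial, then disconnect. Roughly (a sketch of what the trace shows; the host NQN/ID are the values generated for this machine, and the polling loop stands in for the harness's waitforserial helper):

    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be
    HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be

    # discovery service and data subsystem both listen on 10.0.0.2:4420
    nvme discover --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -a 10.0.0.2 -s 4420

    # connect; the two malloc namespaces show up as /dev/nvme0n1 and /dev/nvme0n2
    nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    # poll until both devices report the subsystem serial
    until [ "$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)" -ge 2 ]; do sleep 2; done
    nvme list

    # tear the connection back down
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1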
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:13.123 rmmod nvme_tcp 00:13:13.123 rmmod nvme_fabrics 00:13:13.123 rmmod nvme_keyring 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1334256 ']' 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1334256 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1334256 ']' 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1334256 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:13.123 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1334256 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1334256' 00:13:13.384 killing process with pid 1334256 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1334256 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1334256 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:13.384 19:09:19 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:15.930 19:09:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:15.930 00:13:15.930 real 0m14.557s 00:13:15.930 user 0m22.085s 00:13:15.930 sys 0m5.912s 00:13:15.930 19:09:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:15.930 19:09:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:15.930 ************************************ 00:13:15.930 END TEST nvmf_nvme_cli 00:13:15.930 ************************************ 00:13:15.930 19:09:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:15.930 19:09:21 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:13:15.930 19:09:21 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:15.930 19:09:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:15.930 19:09:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.930 19:09:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:15.930 ************************************ 00:13:15.930 START TEST nvmf_vfio_user 00:13:15.930 ************************************ 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:15.930 * Looking for test storage... 00:13:15.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:15.930 
19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1335736 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1335736' 00:13:15.930 Process pid: 1335736 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1335736 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1335736 ']' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:15.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:15.930 19:09:21 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:15.930 [2024-07-12 19:09:21.759994] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:13:15.930 [2024-07-12 19:09:21.760052] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:15.930 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.930 [2024-07-12 19:09:21.823252] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:15.930 [2024-07-12 19:09:21.896117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:15.930 [2024-07-12 19:09:21.896158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:15.930 [2024-07-12 19:09:21.896166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:15.930 [2024-07-12 19:09:21.896172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:15.930 [2024-07-12 19:09:21.896178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:15.930 [2024-07-12 19:09:21.896315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:15.930 [2024-07-12 19:09:21.896514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:15.930 [2024-07-12 19:09:21.896670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:15.930 [2024-07-12 19:09:21.896670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.502 19:09:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:16.502 19:09:22 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:16.502 19:09:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:17.441 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:17.701 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:17.701 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:17.701 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:17.701 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:17.701 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:17.962 Malloc1 00:13:17.962 19:09:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:17.962 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:18.222 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:18.482 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:18.482 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:18.482 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:18.482 Malloc2 00:13:18.482 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:18.743 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:19.003 19:09:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:19.003 19:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:19.003 19:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:19.003 19:09:25 
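The vfio-user setup traced above mirrors the TCP case, except that each listener "address" is a directory under /var/run/vfio-user in which the target creates the vfio-user control socket and BAR files. Condensed into the per-device loop the trace walks through for both devices (again plain rpc.py calls against the running nvmf_tgt; SPDK_DIR is the same illustrative shorthand as before):

    $SPDK_DIR/scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user

    for i in 1 2; do
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $SPDK_DIR/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        $SPDK_DIR/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $SPDK_DIR/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done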
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:19.003 19:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:19.003 19:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:19.003 19:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:19.003 [2024-07-12 19:09:25.121660] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:13:19.003 [2024-07-12 19:09:25.121706] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1336436 ] 00:13:19.003 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.264 [2024-07-12 19:09:25.154745] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:19.264 [2024-07-12 19:09:25.160096] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:19.264 [2024-07-12 19:09:25.160115] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb63b7fe000 00:13:19.264 [2024-07-12 19:09:25.161083] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.162090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.163090] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.164097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.165106] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.166109] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.167114] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.168120] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:19.264 [2024-07-12 19:09:25.169130] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:19.264 [2024-07-12 19:09:25.169143] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb63b7f3000 00:13:19.264 [2024-07-12 19:09:25.170469] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:19.264 [2024-07-12 19:09:25.187378] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:19.264 [2024-07-12 19:09:25.187405] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:13:19.264 [2024-07-12 19:09:25.192258] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:19.265 [2024-07-12 19:09:25.192307] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:19.265 [2024-07-12 19:09:25.192396] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:13:19.265 [2024-07-12 19:09:25.192415] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:13:19.265 [2024-07-12 19:09:25.192421] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:13:19.265 [2024-07-12 19:09:25.193264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:19.265 [2024-07-12 19:09:25.193273] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:13:19.265 [2024-07-12 19:09:25.193280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:13:19.265 [2024-07-12 19:09:25.194269] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:19.265 [2024-07-12 19:09:25.194278] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:13:19.265 [2024-07-12 19:09:25.194285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:13:19.265 [2024-07-12 19:09:25.195277] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:19.265 [2024-07-12 19:09:25.195285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:19.265 [2024-07-12 19:09:25.196278] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:19.265 [2024-07-12 19:09:25.196286] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:13:19.265 [2024-07-12 19:09:25.196291] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:13:19.265 [2024-07-12 19:09:25.196298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:19.265 [2024-07-12 19:09:25.196403] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:13:19.265 [2024-07-12 19:09:25.196408] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:19.265 [2024-07-12 19:09:25.196413] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:19.265 [2024-07-12 19:09:25.197290] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:19.265 [2024-07-12 19:09:25.198296] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:19.265 [2024-07-12 19:09:25.199300] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:19.265 [2024-07-12 19:09:25.200296] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:19.265 [2024-07-12 19:09:25.200350] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:19.265 [2024-07-12 19:09:25.201316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:19.265 [2024-07-12 19:09:25.201324] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:19.265 [2024-07-12 19:09:25.201328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201350] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:13:19.265 [2024-07-12 19:09:25.201357] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201373] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:19.265 [2024-07-12 19:09:25.201379] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.265 [2024-07-12 19:09:25.201392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201439] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:13:19.265 [2024-07-12 19:09:25.201446] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:13:19.265 [2024-07-12 19:09:25.201450] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:13:19.265 [2024-07-12 19:09:25.201455] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:19.265 [2024-07-12 19:09:25.201459] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:13:19.265 [2024-07-12 19:09:25.201464] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:13:19.265 [2024-07-12 19:09:25.201469] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201477] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201487] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.265 [2024-07-12 19:09:25.201520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.265 [2024-07-12 19:09:25.201532] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.265 [2024-07-12 19:09:25.201540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.265 [2024-07-12 19:09:25.201545] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201553] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201562] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201574] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:13:19.265 [2024-07-12 19:09:25.201579] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201586] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201673] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201689] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:19.265 [2024-07-12 19:09:25.201693] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:19.265 [2024-07-12 19:09:25.201699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201720] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:13:19.265 [2024-07-12 19:09:25.201732] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201740] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201747] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:19.265 [2024-07-12 19:09:25.201751] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.265 [2024-07-12 19:09:25.201757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201806] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:19.265 [2024-07-12 19:09:25.201810] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.265 [2024-07-12 19:09:25.201816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201842] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
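Editor's note: the nvme_vfio_user.c get_reg/set_reg lines in this trace report raw controller-register accesses by byte offset only. They are easier to follow with the standard NVMe register map in mind (CAP at 0x0, VS at 0x8, CC at 0x14, CSTS at 0x1c, AQA at 0x24, ASQ at 0x28, ACQ at 0x30), which is exactly the enable handshake shown above: program ASQ/ACQ/AQA, set CC.EN = 1, then poll CSTS.RDY. A minimal decoding sketch; the helper name and regex are illustrative and not part of the test suite:

```python
import re

# Standard NVMe controller register offsets (NVMe base specification); these are
# the offsets that appear in the nvme_vfio_user.c *DEBUG* lines above.
NVME_REGS = {
    0x00: "CAP",   # Controller Capabilities
    0x08: "VS",    # Version (0x10300 -> NVMe 1.3)
    0x14: "CC",    # Controller Configuration (bit 0 = CC.EN)
    0x1C: "CSTS",  # Controller Status (bit 0 = CSTS.RDY)
    0x24: "AQA",   # Admin Queue Attributes
    0x28: "ASQ",   # Admin Submission Queue base address
    0x30: "ACQ",   # Admin Completion Queue base address
}

REG_ACCESS = re.compile(r"offset (0x[0-9a-fA-F]+), value (0x[0-9a-fA-F]+)")

def decode_reg_access(line: str) -> str:
    """Annotate one vfio-user register-access debug line with the register name."""
    m = REG_ACCESS.search(line)
    if not m:
        return line
    offset, value = (int(g, 16) for g in m.groups())
    return f"{NVME_REGS.get(offset, hex(offset))} = {value:#x}"

# e.g. the CC write that enables the controller in the trace above:
print(decode_reg_access("ctrlr ...: offset 0x14, value 0x460001"))  # CC = 0x460001
```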
00:13:19.265 [2024-07-12 19:09:25.201849] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201855] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201865] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201870] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:13:19.265 [2024-07-12 19:09:25.201874] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:13:19.265 [2024-07-12 19:09:25.201879] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:13:19.265 [2024-07-12 19:09:25.201897] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201941] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.201972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.201985] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:19.265 [2024-07-12 19:09:25.201989] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:19.265 [2024-07-12 19:09:25.201993] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:19.265 [2024-07-12 19:09:25.201998] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:19.265 [2024-07-12 19:09:25.202004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:19.265 [2024-07-12 19:09:25.202012] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:19.265 
[2024-07-12 19:09:25.202016] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:19.265 [2024-07-12 19:09:25.202022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.202030] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:19.265 [2024-07-12 19:09:25.202034] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:19.265 [2024-07-12 19:09:25.202040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.202048] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:19.265 [2024-07-12 19:09:25.202052] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:19.265 [2024-07-12 19:09:25.202058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:19.265 [2024-07-12 19:09:25.202065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.202076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.202087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:19.265 [2024-07-12 19:09:25.202094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:19.265 ===================================================== 00:13:19.265 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:19.265 ===================================================== 00:13:19.265 Controller Capabilities/Features 00:13:19.265 ================================ 00:13:19.265 Vendor ID: 4e58 00:13:19.265 Subsystem Vendor ID: 4e58 00:13:19.265 Serial Number: SPDK1 00:13:19.265 Model Number: SPDK bdev Controller 00:13:19.265 Firmware Version: 24.09 00:13:19.265 Recommended Arb Burst: 6 00:13:19.265 IEEE OUI Identifier: 8d 6b 50 00:13:19.265 Multi-path I/O 00:13:19.265 May have multiple subsystem ports: Yes 00:13:19.265 May have multiple controllers: Yes 00:13:19.265 Associated with SR-IOV VF: No 00:13:19.265 Max Data Transfer Size: 131072 00:13:19.265 Max Number of Namespaces: 32 00:13:19.265 Max Number of I/O Queues: 127 00:13:19.265 NVMe Specification Version (VS): 1.3 00:13:19.265 NVMe Specification Version (Identify): 1.3 00:13:19.265 Maximum Queue Entries: 256 00:13:19.265 Contiguous Queues Required: Yes 00:13:19.265 Arbitration Mechanisms Supported 00:13:19.265 Weighted Round Robin: Not Supported 00:13:19.265 Vendor Specific: Not Supported 00:13:19.265 Reset Timeout: 15000 ms 00:13:19.265 Doorbell Stride: 4 bytes 00:13:19.265 NVM Subsystem Reset: Not Supported 00:13:19.265 Command Sets Supported 00:13:19.265 NVM Command Set: Supported 00:13:19.265 Boot Partition: Not Supported 00:13:19.265 Memory Page Size Minimum: 4096 bytes 00:13:19.265 Memory Page Size Maximum: 4096 bytes 00:13:19.265 Persistent Memory Region: Not Supported 
00:13:19.265 Optional Asynchronous Events Supported 00:13:19.265 Namespace Attribute Notices: Supported 00:13:19.265 Firmware Activation Notices: Not Supported 00:13:19.265 ANA Change Notices: Not Supported 00:13:19.265 PLE Aggregate Log Change Notices: Not Supported 00:13:19.265 LBA Status Info Alert Notices: Not Supported 00:13:19.265 EGE Aggregate Log Change Notices: Not Supported 00:13:19.265 Normal NVM Subsystem Shutdown event: Not Supported 00:13:19.265 Zone Descriptor Change Notices: Not Supported 00:13:19.265 Discovery Log Change Notices: Not Supported 00:13:19.265 Controller Attributes 00:13:19.265 128-bit Host Identifier: Supported 00:13:19.265 Non-Operational Permissive Mode: Not Supported 00:13:19.265 NVM Sets: Not Supported 00:13:19.265 Read Recovery Levels: Not Supported 00:13:19.265 Endurance Groups: Not Supported 00:13:19.265 Predictable Latency Mode: Not Supported 00:13:19.265 Traffic Based Keep ALive: Not Supported 00:13:19.265 Namespace Granularity: Not Supported 00:13:19.265 SQ Associations: Not Supported 00:13:19.265 UUID List: Not Supported 00:13:19.265 Multi-Domain Subsystem: Not Supported 00:13:19.265 Fixed Capacity Management: Not Supported 00:13:19.265 Variable Capacity Management: Not Supported 00:13:19.265 Delete Endurance Group: Not Supported 00:13:19.265 Delete NVM Set: Not Supported 00:13:19.265 Extended LBA Formats Supported: Not Supported 00:13:19.265 Flexible Data Placement Supported: Not Supported 00:13:19.265 00:13:19.265 Controller Memory Buffer Support 00:13:19.265 ================================ 00:13:19.265 Supported: No 00:13:19.265 00:13:19.265 Persistent Memory Region Support 00:13:19.265 ================================ 00:13:19.265 Supported: No 00:13:19.265 00:13:19.266 Admin Command Set Attributes 00:13:19.266 ============================ 00:13:19.266 Security Send/Receive: Not Supported 00:13:19.266 Format NVM: Not Supported 00:13:19.266 Firmware Activate/Download: Not Supported 00:13:19.266 Namespace Management: Not Supported 00:13:19.266 Device Self-Test: Not Supported 00:13:19.266 Directives: Not Supported 00:13:19.266 NVMe-MI: Not Supported 00:13:19.266 Virtualization Management: Not Supported 00:13:19.266 Doorbell Buffer Config: Not Supported 00:13:19.266 Get LBA Status Capability: Not Supported 00:13:19.266 Command & Feature Lockdown Capability: Not Supported 00:13:19.266 Abort Command Limit: 4 00:13:19.266 Async Event Request Limit: 4 00:13:19.266 Number of Firmware Slots: N/A 00:13:19.266 Firmware Slot 1 Read-Only: N/A 00:13:19.266 Firmware Activation Without Reset: N/A 00:13:19.266 Multiple Update Detection Support: N/A 00:13:19.266 Firmware Update Granularity: No Information Provided 00:13:19.266 Per-Namespace SMART Log: No 00:13:19.266 Asymmetric Namespace Access Log Page: Not Supported 00:13:19.266 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:19.266 Command Effects Log Page: Supported 00:13:19.266 Get Log Page Extended Data: Supported 00:13:19.266 Telemetry Log Pages: Not Supported 00:13:19.266 Persistent Event Log Pages: Not Supported 00:13:19.266 Supported Log Pages Log Page: May Support 00:13:19.266 Commands Supported & Effects Log Page: Not Supported 00:13:19.266 Feature Identifiers & Effects Log Page:May Support 00:13:19.266 NVMe-MI Commands & Effects Log Page: May Support 00:13:19.266 Data Area 4 for Telemetry Log: Not Supported 00:13:19.266 Error Log Page Entries Supported: 128 00:13:19.266 Keep Alive: Supported 00:13:19.266 Keep Alive Granularity: 10000 ms 00:13:19.266 00:13:19.266 NVM Command Set Attributes 
00:13:19.266 ========================== 00:13:19.266 Submission Queue Entry Size 00:13:19.266 Max: 64 00:13:19.266 Min: 64 00:13:19.266 Completion Queue Entry Size 00:13:19.266 Max: 16 00:13:19.266 Min: 16 00:13:19.266 Number of Namespaces: 32 00:13:19.266 Compare Command: Supported 00:13:19.266 Write Uncorrectable Command: Not Supported 00:13:19.266 Dataset Management Command: Supported 00:13:19.266 Write Zeroes Command: Supported 00:13:19.266 Set Features Save Field: Not Supported 00:13:19.266 Reservations: Not Supported 00:13:19.266 Timestamp: Not Supported 00:13:19.266 Copy: Supported 00:13:19.266 Volatile Write Cache: Present 00:13:19.266 Atomic Write Unit (Normal): 1 00:13:19.266 Atomic Write Unit (PFail): 1 00:13:19.266 Atomic Compare & Write Unit: 1 00:13:19.266 Fused Compare & Write: Supported 00:13:19.266 Scatter-Gather List 00:13:19.266 SGL Command Set: Supported (Dword aligned) 00:13:19.266 SGL Keyed: Not Supported 00:13:19.266 SGL Bit Bucket Descriptor: Not Supported 00:13:19.266 SGL Metadata Pointer: Not Supported 00:13:19.266 Oversized SGL: Not Supported 00:13:19.266 SGL Metadata Address: Not Supported 00:13:19.266 SGL Offset: Not Supported 00:13:19.266 Transport SGL Data Block: Not Supported 00:13:19.266 Replay Protected Memory Block: Not Supported 00:13:19.266 00:13:19.266 Firmware Slot Information 00:13:19.266 ========================= 00:13:19.266 Active slot: 1 00:13:19.266 Slot 1 Firmware Revision: 24.09 00:13:19.266 00:13:19.266 00:13:19.266 Commands Supported and Effects 00:13:19.266 ============================== 00:13:19.266 Admin Commands 00:13:19.266 -------------- 00:13:19.266 Get Log Page (02h): Supported 00:13:19.266 Identify (06h): Supported 00:13:19.266 Abort (08h): Supported 00:13:19.266 Set Features (09h): Supported 00:13:19.266 Get Features (0Ah): Supported 00:13:19.266 Asynchronous Event Request (0Ch): Supported 00:13:19.266 Keep Alive (18h): Supported 00:13:19.266 I/O Commands 00:13:19.266 ------------ 00:13:19.266 Flush (00h): Supported LBA-Change 00:13:19.266 Write (01h): Supported LBA-Change 00:13:19.266 Read (02h): Supported 00:13:19.266 Compare (05h): Supported 00:13:19.266 Write Zeroes (08h): Supported LBA-Change 00:13:19.266 Dataset Management (09h): Supported LBA-Change 00:13:19.266 Copy (19h): Supported LBA-Change 00:13:19.266 00:13:19.266 Error Log 00:13:19.266 ========= 00:13:19.266 00:13:19.266 Arbitration 00:13:19.266 =========== 00:13:19.266 Arbitration Burst: 1 00:13:19.266 00:13:19.266 Power Management 00:13:19.266 ================ 00:13:19.266 Number of Power States: 1 00:13:19.266 Current Power State: Power State #0 00:13:19.266 Power State #0: 00:13:19.266 Max Power: 0.00 W 00:13:19.266 Non-Operational State: Operational 00:13:19.266 Entry Latency: Not Reported 00:13:19.266 Exit Latency: Not Reported 00:13:19.266 Relative Read Throughput: 0 00:13:19.266 Relative Read Latency: 0 00:13:19.266 Relative Write Throughput: 0 00:13:19.266 Relative Write Latency: 0 00:13:19.266 Idle Power: Not Reported 00:13:19.266 Active Power: Not Reported 00:13:19.266 Non-Operational Permissive Mode: Not Supported 00:13:19.266 00:13:19.266 Health Information 00:13:19.266 ================== 00:13:19.266 Critical Warnings: 00:13:19.266 Available Spare Space: OK 00:13:19.266 Temperature: OK 00:13:19.266 Device Reliability: OK 00:13:19.266 Read Only: No 00:13:19.266 Volatile Memory Backup: OK 00:13:19.266 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:19.266 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:19.266 Available Spare: 0% 00:13:19.266 
Available Sp[2024-07-12 19:09:25.202201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:19.266 [2024-07-12 19:09:25.202209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:19.266 [2024-07-12 19:09:25.202237] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:13:19.266 [2024-07-12 19:09:25.202246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.266 [2024-07-12 19:09:25.202253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.266 [2024-07-12 19:09:25.202259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.266 [2024-07-12 19:09:25.202265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.266 [2024-07-12 19:09:25.202321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:19.266 [2024-07-12 19:09:25.202331] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:19.266 [2024-07-12 19:09:25.203322] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:19.266 [2024-07-12 19:09:25.203362] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:13:19.266 [2024-07-12 19:09:25.203368] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:13:19.266 [2024-07-12 19:09:25.204337] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:19.266 [2024-07-12 19:09:25.204350] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:13:19.266 [2024-07-12 19:09:25.204412] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:19.266 [2024-07-12 19:09:25.208129] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:19.266 are Threshold: 0% 00:13:19.266 Life Percentage Used: 0% 00:13:19.266 Data Units Read: 0 00:13:19.266 Data Units Written: 0 00:13:19.266 Host Read Commands: 0 00:13:19.266 Host Write Commands: 0 00:13:19.266 Controller Busy Time: 0 minutes 00:13:19.266 Power Cycles: 0 00:13:19.266 Power On Hours: 0 hours 00:13:19.266 Unsafe Shutdowns: 0 00:13:19.266 Unrecoverable Media Errors: 0 00:13:19.266 Lifetime Error Log Entries: 0 00:13:19.266 Warning Temperature Time: 0 minutes 00:13:19.266 Critical Temperature Time: 0 minutes 00:13:19.266 00:13:19.266 Number of Queues 00:13:19.266 ================ 00:13:19.266 Number of I/O Submission Queues: 127 00:13:19.266 Number of I/O Completion Queues: 127 00:13:19.266 00:13:19.266 Active Namespaces 00:13:19.266 ================= 00:13:19.266 Namespace ID:1 00:13:19.266 Error Recovery Timeout: Unlimited 00:13:19.266 Command 
Set Identifier: NVM (00h) 00:13:19.266 Deallocate: Supported 00:13:19.266 Deallocated/Unwritten Error: Not Supported 00:13:19.266 Deallocated Read Value: Unknown 00:13:19.266 Deallocate in Write Zeroes: Not Supported 00:13:19.266 Deallocated Guard Field: 0xFFFF 00:13:19.266 Flush: Supported 00:13:19.266 Reservation: Supported 00:13:19.266 Namespace Sharing Capabilities: Multiple Controllers 00:13:19.266 Size (in LBAs): 131072 (0GiB) 00:13:19.266 Capacity (in LBAs): 131072 (0GiB) 00:13:19.266 Utilization (in LBAs): 131072 (0GiB) 00:13:19.266 NGUID: 7D1FAB5E568342FEBAEB509DB97845FC 00:13:19.266 UUID: 7d1fab5e-5683-42fe-baeb-509db97845fc 00:13:19.266 Thin Provisioning: Not Supported 00:13:19.266 Per-NS Atomic Units: Yes 00:13:19.266 Atomic Boundary Size (Normal): 0 00:13:19.266 Atomic Boundary Size (PFail): 0 00:13:19.266 Atomic Boundary Offset: 0 00:13:19.266 Maximum Single Source Range Length: 65535 00:13:19.266 Maximum Copy Length: 65535 00:13:19.266 Maximum Source Range Count: 1 00:13:19.266 NGUID/EUI64 Never Reused: No 00:13:19.266 Namespace Write Protected: No 00:13:19.266 Number of LBA Formats: 1 00:13:19.266 Current LBA Format: LBA Format #00 00:13:19.266 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:19.266 00:13:19.266 19:09:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:19.266 EAL: No free 2048 kB hugepages reported on node 1 00:13:19.266 [2024-07-12 19:09:25.391747] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:24.545 Initializing NVMe Controllers 00:13:24.545 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:24.545 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:24.545 Initialization complete. Launching workers. 00:13:24.545 ======================================================== 00:13:24.545 Latency(us) 00:13:24.545 Device Information : IOPS MiB/s Average min max 00:13:24.545 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39967.59 156.12 3202.47 841.76 6879.44 00:13:24.545 ======================================================== 00:13:24.545 Total : 39967.59 156.12 3202.47 841.76 6879.44 00:13:24.545 00:13:24.545 [2024-07-12 19:09:30.410172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:24.545 19:09:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:24.545 EAL: No free 2048 kB hugepages reported on node 1 00:13:24.545 [2024-07-12 19:09:30.594038] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:29.831 Initializing NVMe Controllers 00:13:29.831 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:29.831 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:29.831 Initialization complete. Launching workers. 
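Editor's note: each spdk_nvme_perf invocation in this log ends with a fixed-width summary table (IOPS, MiB/s, then average/min/max latency in microseconds) per attached VFIOUSER namespace, plus a Total row. A rough sketch of pulling the Total row out of such output for trend tracking; the function name and the assumption that the row carries exactly those five numeric fields are mine, not something the harness provides:

```python
import re
from typing import Optional

def parse_perf_total(output: str) -> Optional[dict]:
    """Extract IOPS, bandwidth and latency (us) from an spdk_nvme_perf summary.

    Expects a row like the one printed above:
        Total : 39967.59  156.12  3202.47  841.76  6879.44
    i.e. IOPS, MiB/s, average, min and max latency.
    """
    for line in output.splitlines():
        if line.strip().startswith("Total"):
            fields = re.findall(r"\d+(?:\.\d+)?", line)
            if len(fields) >= 5:
                iops, mib_s, avg, lo, hi = map(float, fields[:5])
                return {"iops": iops, "mib_per_s": mib_s,
                        "lat_avg_us": avg, "lat_min_us": lo, "lat_max_us": hi}
    return None

# For the 4 KiB read run above this yields roughly
# {'iops': 39967.59, 'mib_per_s': 156.12, 'lat_avg_us': 3202.47, ...}
```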
00:13:29.831 ======================================================== 00:13:29.831 Latency(us) 00:13:29.831 Device Information : IOPS MiB/s Average min max 00:13:29.831 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7980.71 7625.04 8059.77 00:13:29.831 ======================================================== 00:13:29.831 Total : 16051.20 62.70 7980.71 7625.04 8059.77 00:13:29.831 00:13:29.831 [2024-07-12 19:09:35.629179] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:29.831 19:09:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:13:29.831 [2024-07-12 19:09:35.824069] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:35.121 [2024-07-12 19:09:40.893304] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:35.121 Initializing NVMe Controllers 00:13:35.121 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.121 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:35.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:35.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:35.121 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:35.121 Initialization complete. Launching workers. 00:13:35.121 Starting thread on core 2 00:13:35.121 Starting thread on core 3 00:13:35.121 Starting thread on core 1 00:13:35.121 19:09:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:35.121 EAL: No free 2048 kB hugepages reported on node 1 00:13:35.121 [2024-07-12 19:09:41.152626] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.422 [2024-07-12 19:09:44.211830] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.422 Initializing NVMe Controllers 00:13:38.422 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.422 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:38.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:38.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:38.422 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:38.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:38.422 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:38.422 Initialization complete. Launching workers. 
00:13:38.422 Starting thread on core 1 with urgent priority queue 00:13:38.422 Starting thread on core 2 with urgent priority queue 00:13:38.422 Starting thread on core 3 with urgent priority queue 00:13:38.422 Starting thread on core 0 with urgent priority queue 00:13:38.422 SPDK bdev Controller (SPDK1 ) core 0: 8135.67 IO/s 12.29 secs/100000 ios 00:13:38.422 SPDK bdev Controller (SPDK1 ) core 1: 8089.00 IO/s 12.36 secs/100000 ios 00:13:38.422 SPDK bdev Controller (SPDK1 ) core 2: 10798.33 IO/s 9.26 secs/100000 ios 00:13:38.422 SPDK bdev Controller (SPDK1 ) core 3: 8339.00 IO/s 11.99 secs/100000 ios 00:13:38.422 ======================================================== 00:13:38.422 00:13:38.422 19:09:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:38.422 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.422 [2024-07-12 19:09:44.482620] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:38.422 Initializing NVMe Controllers 00:13:38.422 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.422 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:38.422 Namespace ID: 1 size: 0GB 00:13:38.422 Initialization complete. 00:13:38.422 INFO: using host memory buffer for IO 00:13:38.422 Hello world! 00:13:38.422 [2024-07-12 19:09:44.516815] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:38.702 19:09:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:38.702 EAL: No free 2048 kB hugepages reported on node 1 00:13:38.702 [2024-07-12 19:09:44.766425] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:39.700 Initializing NVMe Controllers 00:13:39.701 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:39.701 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:39.701 Initialization complete. Launching workers. 
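Editor's note: in the arbitration summary above, the "secs/100000 ios" column is simply the 100000-I/O quota divided by the per-core IO rate; a quick cross-check against the core 0 row (my arithmetic, not output from the harness):

```python
# Core 0 reported 8135.67 IO/s for its 100000-I/O quota:
io_per_s = 8135.67
print(round(100000 / io_per_s, 2))   # 12.29 -> matches "12.29 secs/100000 ios"
```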
00:13:39.701 submit (in ns) avg, min, max = 8040.0, 3902.5, 4996915.0 00:13:39.701 complete (in ns) avg, min, max = 17942.3, 2383.3, 4074743.3 00:13:39.701 00:13:39.701 Submit histogram 00:13:39.701 ================ 00:13:39.701 Range in us Cumulative Count 00:13:39.701 3.893 - 3.920: 0.9939% ( 192) 00:13:39.701 3.920 - 3.947: 5.5961% ( 889) 00:13:39.701 3.947 - 3.973: 14.4484% ( 1710) 00:13:39.701 3.973 - 4.000: 26.1997% ( 2270) 00:13:39.701 4.000 - 4.027: 37.2004% ( 2125) 00:13:39.701 4.027 - 4.053: 48.4392% ( 2171) 00:13:39.701 4.053 - 4.080: 63.6175% ( 2932) 00:13:39.701 4.080 - 4.107: 78.3093% ( 2838) 00:13:39.701 4.107 - 4.133: 89.9467% ( 2248) 00:13:39.701 4.133 - 4.160: 96.1329% ( 1195) 00:13:39.701 4.160 - 4.187: 98.5557% ( 468) 00:13:39.701 4.187 - 4.213: 99.2494% ( 134) 00:13:39.701 4.213 - 4.240: 99.4771% ( 44) 00:13:39.701 4.240 - 4.267: 99.5237% ( 9) 00:13:39.701 4.267 - 4.293: 99.5341% ( 2) 00:13:39.701 4.293 - 4.320: 99.5393% ( 1) 00:13:39.701 4.373 - 4.400: 99.5444% ( 1) 00:13:39.701 4.400 - 4.427: 99.5496% ( 1) 00:13:39.701 4.880 - 4.907: 99.5548% ( 1) 00:13:39.701 4.933 - 4.960: 99.5600% ( 1) 00:13:39.701 5.120 - 5.147: 99.5703% ( 2) 00:13:39.701 5.227 - 5.253: 99.5755% ( 1) 00:13:39.701 5.493 - 5.520: 99.5807% ( 1) 00:13:39.701 5.547 - 5.573: 99.5859% ( 1) 00:13:39.701 5.600 - 5.627: 99.5910% ( 1) 00:13:39.701 5.707 - 5.733: 99.5962% ( 1) 00:13:39.701 5.787 - 5.813: 99.6014% ( 1) 00:13:39.701 5.973 - 6.000: 99.6117% ( 2) 00:13:39.701 6.053 - 6.080: 99.6221% ( 2) 00:13:39.701 6.160 - 6.187: 99.6273% ( 1) 00:13:39.701 6.187 - 6.213: 99.6480% ( 4) 00:13:39.701 6.213 - 6.240: 99.6583% ( 2) 00:13:39.701 6.240 - 6.267: 99.6635% ( 1) 00:13:39.701 6.267 - 6.293: 99.6687% ( 1) 00:13:39.701 6.320 - 6.347: 99.6739% ( 1) 00:13:39.701 6.347 - 6.373: 99.6790% ( 1) 00:13:39.701 6.480 - 6.507: 99.6842% ( 1) 00:13:39.701 6.507 - 6.533: 99.6997% ( 3) 00:13:39.701 6.560 - 6.587: 99.7101% ( 2) 00:13:39.701 6.613 - 6.640: 99.7360% ( 5) 00:13:39.701 6.640 - 6.667: 99.7412% ( 1) 00:13:39.701 6.693 - 6.720: 99.7515% ( 2) 00:13:39.701 6.720 - 6.747: 99.7619% ( 2) 00:13:39.701 6.747 - 6.773: 99.7670% ( 1) 00:13:39.701 6.800 - 6.827: 99.7722% ( 1) 00:13:39.701 6.880 - 6.933: 99.7981% ( 5) 00:13:39.701 6.933 - 6.987: 99.8085% ( 2) 00:13:39.701 6.987 - 7.040: 99.8136% ( 1) 00:13:39.701 7.040 - 7.093: 99.8240% ( 2) 00:13:39.701 7.200 - 7.253: 99.8395% ( 3) 00:13:39.701 7.360 - 7.413: 99.8499% ( 2) 00:13:39.701 7.467 - 7.520: 99.8654% ( 3) 00:13:39.701 7.573 - 7.627: 99.8758% ( 2) 00:13:39.701 7.627 - 7.680: 99.8809% ( 1) 00:13:39.701 7.680 - 7.733: 99.8861% ( 1) 00:13:39.701 8.053 - 8.107: 99.8913% ( 1) 00:13:39.701 12.533 - 12.587: 99.8965% ( 1) 00:13:39.701 13.280 - 13.333: 99.9016% ( 1) 00:13:39.701 3986.773 - 4014.080: 99.9948% ( 18) 00:13:39.701 4969.813 - 4997.120: 100.0000% ( 1) 00:13:39.701 00:13:39.701 Complete histogram 00:13:39.701 ================== 00:13:39.701 Range in us Cumulative Count 00:13:39.701 2.373 - 2.387: 0.0052% ( 1) 00:13:39.701 2.387 - 2.400: 0.3365% ( 64) 00:13:39.701 2.400 - 2.413: 1.0302% ( 134) 00:13:39.701 2.413 - 2.427: 1.1389% ( 21) 00:13:39.701 2.427 - 2.440: 28.9952% ( 5381) 00:13:39.701 2.440 - 2.453: 55.5521% ( 5130) 00:13:39.701 2.453 - 2.467: 66.3664% ( 2089) 00:13:39.701 2.467 - 2.480: 76.6475% ( 1986) 00:13:39.701 2.480 - 2.493: 81.3687% ( 912) 00:13:39.701 2.493 - 2.507: 83.4446% ( 401) 00:13:39.701 2.507 - 2.520: 89.5015% ( 1170) 00:13:39.701 2.520 - 2.533: 94.3159% ( 930) 00:13:39.701 2.533 - 2.547: 96.6869% ( 458) 00:13:39.701 2.547 - 2.560: 98.3693% ( 
325) 00:13:39.701 2.560 - 2.573: 99.1717% ( 155) 00:13:39.701 2.573 - 2.587: 99.3167% ( 28) 00:13:39.701 2.587 - 2.600: 99.3633% ( 9) 00:13:39.701 2.600 - [2024-07-12 19:09:45.788875] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:39.981 2.613: 99.3788% ( 3) 00:13:39.981 2.613 - 2.627: 99.3891% ( 2) 00:13:39.981 4.240 - 4.267: 99.3943% ( 1) 00:13:39.981 4.320 - 4.347: 99.3995% ( 1) 00:13:39.981 4.507 - 4.533: 99.4047% ( 1) 00:13:39.981 4.587 - 4.613: 99.4098% ( 1) 00:13:39.981 4.613 - 4.640: 99.4150% ( 1) 00:13:39.981 4.693 - 4.720: 99.4202% ( 1) 00:13:39.981 4.827 - 4.853: 99.4409% ( 4) 00:13:39.981 4.853 - 4.880: 99.4461% ( 1) 00:13:39.981 4.880 - 4.907: 99.4513% ( 1) 00:13:39.981 4.907 - 4.933: 99.4616% ( 2) 00:13:39.981 4.933 - 4.960: 99.4720% ( 2) 00:13:39.981 4.987 - 5.013: 99.4771% ( 1) 00:13:39.981 5.040 - 5.067: 99.4823% ( 1) 00:13:39.981 5.093 - 5.120: 99.4875% ( 1) 00:13:39.981 5.173 - 5.200: 99.4927% ( 1) 00:13:39.981 5.227 - 5.253: 99.4979% ( 1) 00:13:39.981 5.253 - 5.280: 99.5082% ( 2) 00:13:39.981 5.280 - 5.307: 99.5186% ( 2) 00:13:39.981 5.467 - 5.493: 99.5237% ( 1) 00:13:39.981 5.547 - 5.573: 99.5289% ( 1) 00:13:39.981 5.573 - 5.600: 99.5341% ( 1) 00:13:39.981 5.627 - 5.653: 99.5393% ( 1) 00:13:39.981 5.653 - 5.680: 99.5496% ( 2) 00:13:39.981 5.680 - 5.707: 99.5548% ( 1) 00:13:39.982 5.760 - 5.787: 99.5651% ( 2) 00:13:39.982 5.867 - 5.893: 99.5703% ( 1) 00:13:39.982 5.920 - 5.947: 99.5755% ( 1) 00:13:39.982 6.213 - 6.240: 99.5807% ( 1) 00:13:39.982 6.880 - 6.933: 99.5859% ( 1) 00:13:39.982 7.147 - 7.200: 99.5910% ( 1) 00:13:39.982 7.200 - 7.253: 99.5962% ( 1) 00:13:39.982 10.400 - 10.453: 99.6014% ( 1) 00:13:39.982 43.733 - 43.947: 99.6066% ( 1) 00:13:39.982 167.253 - 168.107: 99.6117% ( 1) 00:13:39.982 3003.733 - 3017.387: 99.6169% ( 1) 00:13:39.982 3986.773 - 4014.080: 99.9948% ( 73) 00:13:39.982 4068.693 - 4096.000: 100.0000% ( 1) 00:13:39.982 00:13:39.982 19:09:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:39.982 19:09:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:39.982 19:09:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:39.982 19:09:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:39.982 19:09:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:39.982 [ 00:13:39.982 { 00:13:39.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.982 "subtype": "Discovery", 00:13:39.982 "listen_addresses": [], 00:13:39.982 "allow_any_host": true, 00:13:39.982 "hosts": [] 00:13:39.982 }, 00:13:39.982 { 00:13:39.982 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:39.982 "subtype": "NVMe", 00:13:39.982 "listen_addresses": [ 00:13:39.982 { 00:13:39.982 "trtype": "VFIOUSER", 00:13:39.982 "adrfam": "IPv4", 00:13:39.982 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:39.982 "trsvcid": "0" 00:13:39.982 } 00:13:39.982 ], 00:13:39.982 "allow_any_host": true, 00:13:39.982 "hosts": [], 00:13:39.982 "serial_number": "SPDK1", 00:13:39.982 "model_number": "SPDK bdev Controller", 00:13:39.982 "max_namespaces": 32, 00:13:39.982 "min_cntlid": 1, 00:13:39.982 "max_cntlid": 65519, 00:13:39.982 "namespaces": [ 00:13:39.982 { 00:13:39.982 "nsid": 1, 
00:13:39.982 "bdev_name": "Malloc1", 00:13:39.982 "name": "Malloc1", 00:13:39.982 "nguid": "7D1FAB5E568342FEBAEB509DB97845FC", 00:13:39.982 "uuid": "7d1fab5e-5683-42fe-baeb-509db97845fc" 00:13:39.982 } 00:13:39.982 ] 00:13:39.982 }, 00:13:39.982 { 00:13:39.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:39.982 "subtype": "NVMe", 00:13:39.982 "listen_addresses": [ 00:13:39.982 { 00:13:39.982 "trtype": "VFIOUSER", 00:13:39.982 "adrfam": "IPv4", 00:13:39.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:39.982 "trsvcid": "0" 00:13:39.982 } 00:13:39.982 ], 00:13:39.982 "allow_any_host": true, 00:13:39.982 "hosts": [], 00:13:39.982 "serial_number": "SPDK2", 00:13:39.982 "model_number": "SPDK bdev Controller", 00:13:39.982 "max_namespaces": 32, 00:13:39.982 "min_cntlid": 1, 00:13:39.982 "max_cntlid": 65519, 00:13:39.982 "namespaces": [ 00:13:39.982 { 00:13:39.982 "nsid": 1, 00:13:39.982 "bdev_name": "Malloc2", 00:13:39.982 "name": "Malloc2", 00:13:39.982 "nguid": "2AA2A7C5E07444588D138C88BDE92197", 00:13:39.982 "uuid": "2aa2a7c5-e074-4458-8d13-8c88bde92197" 00:13:39.982 } 00:13:39.982 ] 00:13:39.982 } 00:13:39.982 ] 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1340578 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:39.982 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:39.982 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.243 Malloc3 00:13:40.243 [2024-07-12 19:09:46.183569] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:40.243 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:40.243 [2024-07-12 19:09:46.340547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:40.243 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:40.504 Asynchronous Event Request test 00:13:40.504 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:40.504 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:40.504 Registering asynchronous event callbacks... 00:13:40.504 Starting namespace attribute notice tests for all controllers... 
00:13:40.504 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:40.504 aer_cb - Changed Namespace 00:13:40.504 Cleaning up... 00:13:40.504 [ 00:13:40.504 { 00:13:40.504 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:40.504 "subtype": "Discovery", 00:13:40.504 "listen_addresses": [], 00:13:40.504 "allow_any_host": true, 00:13:40.504 "hosts": [] 00:13:40.504 }, 00:13:40.504 { 00:13:40.504 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:40.504 "subtype": "NVMe", 00:13:40.504 "listen_addresses": [ 00:13:40.504 { 00:13:40.504 "trtype": "VFIOUSER", 00:13:40.504 "adrfam": "IPv4", 00:13:40.504 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:40.504 "trsvcid": "0" 00:13:40.504 } 00:13:40.504 ], 00:13:40.504 "allow_any_host": true, 00:13:40.504 "hosts": [], 00:13:40.504 "serial_number": "SPDK1", 00:13:40.504 "model_number": "SPDK bdev Controller", 00:13:40.504 "max_namespaces": 32, 00:13:40.504 "min_cntlid": 1, 00:13:40.504 "max_cntlid": 65519, 00:13:40.504 "namespaces": [ 00:13:40.504 { 00:13:40.504 "nsid": 1, 00:13:40.504 "bdev_name": "Malloc1", 00:13:40.504 "name": "Malloc1", 00:13:40.504 "nguid": "7D1FAB5E568342FEBAEB509DB97845FC", 00:13:40.504 "uuid": "7d1fab5e-5683-42fe-baeb-509db97845fc" 00:13:40.504 }, 00:13:40.504 { 00:13:40.504 "nsid": 2, 00:13:40.504 "bdev_name": "Malloc3", 00:13:40.504 "name": "Malloc3", 00:13:40.504 "nguid": "9921EB8A6D124A4F9F1ECAB41700192F", 00:13:40.504 "uuid": "9921eb8a-6d12-4a4f-9f1e-cab41700192f" 00:13:40.504 } 00:13:40.504 ] 00:13:40.504 }, 00:13:40.504 { 00:13:40.504 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:40.504 "subtype": "NVMe", 00:13:40.504 "listen_addresses": [ 00:13:40.504 { 00:13:40.504 "trtype": "VFIOUSER", 00:13:40.504 "adrfam": "IPv4", 00:13:40.504 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:40.504 "trsvcid": "0" 00:13:40.504 } 00:13:40.504 ], 00:13:40.504 "allow_any_host": true, 00:13:40.504 "hosts": [], 00:13:40.504 "serial_number": "SPDK2", 00:13:40.504 "model_number": "SPDK bdev Controller", 00:13:40.504 "max_namespaces": 32, 00:13:40.504 "min_cntlid": 1, 00:13:40.504 "max_cntlid": 65519, 00:13:40.504 "namespaces": [ 00:13:40.504 { 00:13:40.504 "nsid": 1, 00:13:40.504 "bdev_name": "Malloc2", 00:13:40.504 "name": "Malloc2", 00:13:40.504 "nguid": "2AA2A7C5E07444588D138C88BDE92197", 00:13:40.504 "uuid": "2aa2a7c5-e074-4458-8d13-8c88bde92197" 00:13:40.504 } 00:13:40.504 ] 00:13:40.504 } 00:13:40.504 ] 00:13:40.504 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1340578 00:13:40.504 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:40.504 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:40.504 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:40.504 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:40.504 [2024-07-12 19:09:46.567342] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
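Editor's note: the `rpc.py nvmf_get_subsystems` dumps above are plain JSON, a list of subsystem objects each carrying its `listen_addresses` and `namespaces`; the second dump differs from the first only by the extra Malloc3 namespace (nsid 2) added to trigger the namespace-attribute AER. A small sketch of consuming that output programmatically; driving rpc.py through subprocess and the helper name are my choices for illustration, the test script itself only prints the JSON:

```python
import json
import subprocess

RPC = "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"

def vfio_user_namespaces() -> dict:
    """Map each VFIOUSER-exposed subsystem NQN to the UUIDs of its namespaces."""
    out = subprocess.run([RPC, "nvmf_get_subsystems"],
                         capture_output=True, text=True, check=True).stdout
    nqn_to_ns = {}
    for subsys in json.loads(out):
        if subsys.get("subtype") != "NVMe":
            continue  # skip the discovery subsystem
        if not any(l.get("trtype") == "VFIOUSER"
                   for l in subsys.get("listen_addresses", [])):
            continue
        nqn_to_ns[subsys["nqn"]] = [ns["uuid"] for ns in subsys.get("namespaces", [])]
    return nqn_to_ns

# Against the second dump above this would return, e.g.:
# {"nqn.2019-07.io.spdk:cnode1": ["7d1fab5e-...", "9921eb8a-..."],
#  "nqn.2019-07.io.spdk:cnode2": ["2aa2a7c5-..."]}
```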
00:13:40.504 [2024-07-12 19:09:46.567382] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340798 ] 00:13:40.504 EAL: No free 2048 kB hugepages reported on node 1 00:13:40.504 [2024-07-12 19:09:46.598639] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:40.504 [2024-07-12 19:09:46.607374] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:40.504 [2024-07-12 19:09:46.607394] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f73d0fe6000 00:13:40.504 [2024-07-12 19:09:46.608370] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.609378] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.610384] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.611387] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.612389] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.613393] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.614399] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.615413] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:40.504 [2024-07-12 19:09:46.616416] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:40.504 [2024-07-12 19:09:46.616426] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f73d0fdb000 00:13:40.504 [2024-07-12 19:09:46.617749] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:40.767 [2024-07-12 19:09:46.638277] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:40.767 [2024-07-12 19:09:46.638299] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:40.767 [2024-07-12 19:09:46.640363] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:40.767 [2024-07-12 19:09:46.640405] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:40.767 [2024-07-12 19:09:46.640484] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:13:40.767 [2024-07-12 19:09:46.640499] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:40.767 [2024-07-12 19:09:46.640505] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:40.767 [2024-07-12 19:09:46.641366] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:40.767 [2024-07-12 19:09:46.641376] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:40.767 [2024-07-12 19:09:46.641383] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:40.767 [2024-07-12 19:09:46.642374] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:40.767 [2024-07-12 19:09:46.642384] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:40.768 [2024-07-12 19:09:46.642391] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:40.768 [2024-07-12 19:09:46.643383] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:40.768 [2024-07-12 19:09:46.643392] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:40.768 [2024-07-12 19:09:46.644387] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:40.768 [2024-07-12 19:09:46.644395] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:40.768 [2024-07-12 19:09:46.644400] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:40.768 [2024-07-12 19:09:46.644407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:40.768 [2024-07-12 19:09:46.644515] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:40.768 [2024-07-12 19:09:46.644520] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:40.768 [2024-07-12 19:09:46.644525] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:40.768 [2024-07-12 19:09:46.645394] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:40.768 [2024-07-12 19:09:46.646395] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:40.768 [2024-07-12 19:09:46.647401] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:40.768 [2024-07-12 19:09:46.648402] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:40.768 [2024-07-12 19:09:46.648440] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:40.768 [2024-07-12 19:09:46.649408] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:40.768 [2024-07-12 19:09:46.649417] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:40.768 [2024-07-12 19:09:46.649422] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.649443] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:40.768 [2024-07-12 19:09:46.649454] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.649467] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.768 [2024-07-12 19:09:46.649472] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.768 [2024-07-12 19:09:46.649484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.656132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.656143] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:40.768 [2024-07-12 19:09:46.656151] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:40.768 [2024-07-12 19:09:46.656156] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:40.768 [2024-07-12 19:09:46.656160] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:40.768 [2024-07-12 19:09:46.656165] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:40.768 [2024-07-12 19:09:46.656170] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:40.768 [2024-07-12 19:09:46.656175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.656182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.656192] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:13:40.768 [2024-07-12 19:09:46.664128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.664143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.768 [2024-07-12 19:09:46.664151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.768 [2024-07-12 19:09:46.664160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.768 [2024-07-12 19:09:46.664168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:40.768 [2024-07-12 19:09:46.664172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.664180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.664189] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.672128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.672135] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:40.768 [2024-07-12 19:09:46.672140] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.672147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.672152] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.672161] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.680128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.680193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.680201] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.680208] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:40.768 [2024-07-12 19:09:46.680213] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:40.768 [2024-07-12 19:09:46.680219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.688128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.688139] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:40.768 [2024-07-12 19:09:46.688147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.688154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.688161] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.768 [2024-07-12 19:09:46.688168] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.768 [2024-07-12 19:09:46.688174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.696127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.696141] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.696148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.696156] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:40.768 [2024-07-12 19:09:46.696160] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.768 [2024-07-12 19:09:46.696166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.704127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.704136] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.704143] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.704150] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.704156] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.704161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.704166] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:40.768 
[2024-07-12 19:09:46.704170] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:40.768 [2024-07-12 19:09:46.704175] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:40.768 [2024-07-12 19:09:46.704180] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:40.768 [2024-07-12 19:09:46.704196] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.712128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.712142] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.720127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.720140] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.728126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:40.768 [2024-07-12 19:09:46.728139] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:40.768 [2024-07-12 19:09:46.736128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:40.769 [2024-07-12 19:09:46.736144] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:40.769 [2024-07-12 19:09:46.736148] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:40.769 [2024-07-12 19:09:46.736152] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:40.769 [2024-07-12 19:09:46.736156] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:40.769 [2024-07-12 19:09:46.736162] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:40.769 [2024-07-12 19:09:46.736169] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:40.769 [2024-07-12 19:09:46.736174] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:40.769 [2024-07-12 19:09:46.736180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:40.769 [2024-07-12 19:09:46.736187] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:40.769 [2024-07-12 19:09:46.736191] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:40.769 [2024-07-12 19:09:46.736197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:13:40.769 [2024-07-12 19:09:46.736205] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:40.769 [2024-07-12 19:09:46.736209] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:40.769 [2024-07-12 19:09:46.736215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:40.769 [2024-07-12 19:09:46.744127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:40.769 [2024-07-12 19:09:46.744141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:40.769 [2024-07-12 19:09:46.744151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:40.769 [2024-07-12 19:09:46.744158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:40.769 ===================================================== 00:13:40.769 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.769 ===================================================== 00:13:40.769 Controller Capabilities/Features 00:13:40.769 ================================ 00:13:40.769 Vendor ID: 4e58 00:13:40.769 Subsystem Vendor ID: 4e58 00:13:40.769 Serial Number: SPDK2 00:13:40.769 Model Number: SPDK bdev Controller 00:13:40.769 Firmware Version: 24.09 00:13:40.769 Recommended Arb Burst: 6 00:13:40.769 IEEE OUI Identifier: 8d 6b 50 00:13:40.769 Multi-path I/O 00:13:40.769 May have multiple subsystem ports: Yes 00:13:40.769 May have multiple controllers: Yes 00:13:40.769 Associated with SR-IOV VF: No 00:13:40.769 Max Data Transfer Size: 131072 00:13:40.769 Max Number of Namespaces: 32 00:13:40.769 Max Number of I/O Queues: 127 00:13:40.769 NVMe Specification Version (VS): 1.3 00:13:40.769 NVMe Specification Version (Identify): 1.3 00:13:40.769 Maximum Queue Entries: 256 00:13:40.769 Contiguous Queues Required: Yes 00:13:40.769 Arbitration Mechanisms Supported 00:13:40.769 Weighted Round Robin: Not Supported 00:13:40.769 Vendor Specific: Not Supported 00:13:40.769 Reset Timeout: 15000 ms 00:13:40.769 Doorbell Stride: 4 bytes 00:13:40.769 NVM Subsystem Reset: Not Supported 00:13:40.769 Command Sets Supported 00:13:40.769 NVM Command Set: Supported 00:13:40.769 Boot Partition: Not Supported 00:13:40.769 Memory Page Size Minimum: 4096 bytes 00:13:40.769 Memory Page Size Maximum: 4096 bytes 00:13:40.769 Persistent Memory Region: Not Supported 00:13:40.769 Optional Asynchronous Events Supported 00:13:40.769 Namespace Attribute Notices: Supported 00:13:40.769 Firmware Activation Notices: Not Supported 00:13:40.769 ANA Change Notices: Not Supported 00:13:40.769 PLE Aggregate Log Change Notices: Not Supported 00:13:40.769 LBA Status Info Alert Notices: Not Supported 00:13:40.769 EGE Aggregate Log Change Notices: Not Supported 00:13:40.769 Normal NVM Subsystem Shutdown event: Not Supported 00:13:40.769 Zone Descriptor Change Notices: Not Supported 00:13:40.769 Discovery Log Change Notices: Not Supported 00:13:40.769 Controller Attributes 00:13:40.769 128-bit Host Identifier: Supported 00:13:40.769 Non-Operational Permissive Mode: Not Supported 00:13:40.769 NVM Sets: Not Supported 00:13:40.769 Read Recovery Levels: Not Supported 
00:13:40.769 Endurance Groups: Not Supported 00:13:40.769 Predictable Latency Mode: Not Supported 00:13:40.769 Traffic Based Keep ALive: Not Supported 00:13:40.769 Namespace Granularity: Not Supported 00:13:40.769 SQ Associations: Not Supported 00:13:40.769 UUID List: Not Supported 00:13:40.769 Multi-Domain Subsystem: Not Supported 00:13:40.769 Fixed Capacity Management: Not Supported 00:13:40.769 Variable Capacity Management: Not Supported 00:13:40.769 Delete Endurance Group: Not Supported 00:13:40.769 Delete NVM Set: Not Supported 00:13:40.769 Extended LBA Formats Supported: Not Supported 00:13:40.769 Flexible Data Placement Supported: Not Supported 00:13:40.769 00:13:40.769 Controller Memory Buffer Support 00:13:40.769 ================================ 00:13:40.769 Supported: No 00:13:40.769 00:13:40.769 Persistent Memory Region Support 00:13:40.769 ================================ 00:13:40.769 Supported: No 00:13:40.769 00:13:40.769 Admin Command Set Attributes 00:13:40.769 ============================ 00:13:40.769 Security Send/Receive: Not Supported 00:13:40.769 Format NVM: Not Supported 00:13:40.769 Firmware Activate/Download: Not Supported 00:13:40.769 Namespace Management: Not Supported 00:13:40.769 Device Self-Test: Not Supported 00:13:40.769 Directives: Not Supported 00:13:40.769 NVMe-MI: Not Supported 00:13:40.769 Virtualization Management: Not Supported 00:13:40.769 Doorbell Buffer Config: Not Supported 00:13:40.769 Get LBA Status Capability: Not Supported 00:13:40.769 Command & Feature Lockdown Capability: Not Supported 00:13:40.769 Abort Command Limit: 4 00:13:40.769 Async Event Request Limit: 4 00:13:40.769 Number of Firmware Slots: N/A 00:13:40.769 Firmware Slot 1 Read-Only: N/A 00:13:40.769 Firmware Activation Without Reset: N/A 00:13:40.769 Multiple Update Detection Support: N/A 00:13:40.769 Firmware Update Granularity: No Information Provided 00:13:40.769 Per-Namespace SMART Log: No 00:13:40.769 Asymmetric Namespace Access Log Page: Not Supported 00:13:40.769 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:40.769 Command Effects Log Page: Supported 00:13:40.769 Get Log Page Extended Data: Supported 00:13:40.769 Telemetry Log Pages: Not Supported 00:13:40.769 Persistent Event Log Pages: Not Supported 00:13:40.769 Supported Log Pages Log Page: May Support 00:13:40.769 Commands Supported & Effects Log Page: Not Supported 00:13:40.769 Feature Identifiers & Effects Log Page:May Support 00:13:40.769 NVMe-MI Commands & Effects Log Page: May Support 00:13:40.769 Data Area 4 for Telemetry Log: Not Supported 00:13:40.769 Error Log Page Entries Supported: 128 00:13:40.769 Keep Alive: Supported 00:13:40.769 Keep Alive Granularity: 10000 ms 00:13:40.769 00:13:40.769 NVM Command Set Attributes 00:13:40.769 ========================== 00:13:40.769 Submission Queue Entry Size 00:13:40.769 Max: 64 00:13:40.769 Min: 64 00:13:40.769 Completion Queue Entry Size 00:13:40.769 Max: 16 00:13:40.769 Min: 16 00:13:40.769 Number of Namespaces: 32 00:13:40.769 Compare Command: Supported 00:13:40.769 Write Uncorrectable Command: Not Supported 00:13:40.769 Dataset Management Command: Supported 00:13:40.769 Write Zeroes Command: Supported 00:13:40.769 Set Features Save Field: Not Supported 00:13:40.769 Reservations: Not Supported 00:13:40.769 Timestamp: Not Supported 00:13:40.769 Copy: Supported 00:13:40.769 Volatile Write Cache: Present 00:13:40.769 Atomic Write Unit (Normal): 1 00:13:40.769 Atomic Write Unit (PFail): 1 00:13:40.769 Atomic Compare & Write Unit: 1 00:13:40.769 Fused Compare & Write: 
Supported 00:13:40.769 Scatter-Gather List 00:13:40.769 SGL Command Set: Supported (Dword aligned) 00:13:40.769 SGL Keyed: Not Supported 00:13:40.769 SGL Bit Bucket Descriptor: Not Supported 00:13:40.769 SGL Metadata Pointer: Not Supported 00:13:40.769 Oversized SGL: Not Supported 00:13:40.769 SGL Metadata Address: Not Supported 00:13:40.769 SGL Offset: Not Supported 00:13:40.769 Transport SGL Data Block: Not Supported 00:13:40.769 Replay Protected Memory Block: Not Supported 00:13:40.769 00:13:40.769 Firmware Slot Information 00:13:40.769 ========================= 00:13:40.769 Active slot: 1 00:13:40.769 Slot 1 Firmware Revision: 24.09 00:13:40.769 00:13:40.769 00:13:40.769 Commands Supported and Effects 00:13:40.769 ============================== 00:13:40.769 Admin Commands 00:13:40.769 -------------- 00:13:40.769 Get Log Page (02h): Supported 00:13:40.769 Identify (06h): Supported 00:13:40.769 Abort (08h): Supported 00:13:40.769 Set Features (09h): Supported 00:13:40.769 Get Features (0Ah): Supported 00:13:40.769 Asynchronous Event Request (0Ch): Supported 00:13:40.769 Keep Alive (18h): Supported 00:13:40.769 I/O Commands 00:13:40.769 ------------ 00:13:40.769 Flush (00h): Supported LBA-Change 00:13:40.769 Write (01h): Supported LBA-Change 00:13:40.769 Read (02h): Supported 00:13:40.769 Compare (05h): Supported 00:13:40.769 Write Zeroes (08h): Supported LBA-Change 00:13:40.769 Dataset Management (09h): Supported LBA-Change 00:13:40.769 Copy (19h): Supported LBA-Change 00:13:40.769 00:13:40.769 Error Log 00:13:40.769 ========= 00:13:40.770 00:13:40.770 Arbitration 00:13:40.770 =========== 00:13:40.770 Arbitration Burst: 1 00:13:40.770 00:13:40.770 Power Management 00:13:40.770 ================ 00:13:40.770 Number of Power States: 1 00:13:40.770 Current Power State: Power State #0 00:13:40.770 Power State #0: 00:13:40.770 Max Power: 0.00 W 00:13:40.770 Non-Operational State: Operational 00:13:40.770 Entry Latency: Not Reported 00:13:40.770 Exit Latency: Not Reported 00:13:40.770 Relative Read Throughput: 0 00:13:40.770 Relative Read Latency: 0 00:13:40.770 Relative Write Throughput: 0 00:13:40.770 Relative Write Latency: 0 00:13:40.770 Idle Power: Not Reported 00:13:40.770 Active Power: Not Reported 00:13:40.770 Non-Operational Permissive Mode: Not Supported 00:13:40.770 00:13:40.770 Health Information 00:13:40.770 ================== 00:13:40.770 Critical Warnings: 00:13:40.770 Available Spare Space: OK 00:13:40.770 Temperature: OK 00:13:40.770 Device Reliability: OK 00:13:40.770 Read Only: No 00:13:40.770 Volatile Memory Backup: OK 00:13:40.770 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:40.770 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:40.770 Available Spare: 0% 00:13:40.770 Available Sp[2024-07-12 19:09:46.744255] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:40.770 [2024-07-12 19:09:46.752127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:40.770 [2024-07-12 19:09:46.752161] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:40.770 [2024-07-12 19:09:46.752170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.770 [2024-07-12 19:09:46.752177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.770 [2024-07-12 19:09:46.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.770 [2024-07-12 19:09:46.752189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:40.770 [2024-07-12 19:09:46.752230] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:40.770 [2024-07-12 19:09:46.752240] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:40.770 [2024-07-12 19:09:46.753244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:40.770 [2024-07-12 19:09:46.753292] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:40.770 [2024-07-12 19:09:46.753298] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:40.770 [2024-07-12 19:09:46.754248] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:40.770 [2024-07-12 19:09:46.754260] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:40.770 [2024-07-12 19:09:46.754306] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:40.770 [2024-07-12 19:09:46.757128] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:40.770 are Threshold: 0% 00:13:40.770 Life Percentage Used: 0% 00:13:40.770 Data Units Read: 0 00:13:40.770 Data Units Written: 0 00:13:40.770 Host Read Commands: 0 00:13:40.770 Host Write Commands: 0 00:13:40.770 Controller Busy Time: 0 minutes 00:13:40.770 Power Cycles: 0 00:13:40.770 Power On Hours: 0 hours 00:13:40.770 Unsafe Shutdowns: 0 00:13:40.770 Unrecoverable Media Errors: 0 00:13:40.770 Lifetime Error Log Entries: 0 00:13:40.770 Warning Temperature Time: 0 minutes 00:13:40.770 Critical Temperature Time: 0 minutes 00:13:40.770 00:13:40.770 Number of Queues 00:13:40.770 ================ 00:13:40.770 Number of I/O Submission Queues: 127 00:13:40.770 Number of I/O Completion Queues: 127 00:13:40.770 00:13:40.770 Active Namespaces 00:13:40.770 ================= 00:13:40.770 Namespace ID:1 00:13:40.770 Error Recovery Timeout: Unlimited 00:13:40.770 Command Set Identifier: NVM (00h) 00:13:40.770 Deallocate: Supported 00:13:40.770 Deallocated/Unwritten Error: Not Supported 00:13:40.770 Deallocated Read Value: Unknown 00:13:40.770 Deallocate in Write Zeroes: Not Supported 00:13:40.770 Deallocated Guard Field: 0xFFFF 00:13:40.770 Flush: Supported 00:13:40.770 Reservation: Supported 00:13:40.770 Namespace Sharing Capabilities: Multiple Controllers 00:13:40.770 Size (in LBAs): 131072 (0GiB) 00:13:40.770 Capacity (in LBAs): 131072 (0GiB) 00:13:40.770 Utilization (in LBAs): 131072 (0GiB) 00:13:40.770 NGUID: 2AA2A7C5E07444588D138C88BDE92197 00:13:40.770 UUID: 2aa2a7c5-e074-4458-8d13-8c88bde92197 00:13:40.770 Thin Provisioning: Not Supported 00:13:40.770 Per-NS Atomic Units: Yes 00:13:40.770 Atomic Boundary Size (Normal): 0 00:13:40.770 Atomic Boundary Size 
(PFail): 0 00:13:40.770 Atomic Boundary Offset: 0 00:13:40.770 Maximum Single Source Range Length: 65535 00:13:40.770 Maximum Copy Length: 65535 00:13:40.770 Maximum Source Range Count: 1 00:13:40.770 NGUID/EUI64 Never Reused: No 00:13:40.770 Namespace Write Protected: No 00:13:40.770 Number of LBA Formats: 1 00:13:40.770 Current LBA Format: LBA Format #00 00:13:40.770 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:40.770 00:13:40.770 19:09:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:40.770 EAL: No free 2048 kB hugepages reported on node 1 00:13:41.031 [2024-07-12 19:09:46.940136] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:46.319 Initializing NVMe Controllers 00:13:46.319 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:46.319 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:46.319 Initialization complete. Launching workers. 00:13:46.319 ======================================================== 00:13:46.319 Latency(us) 00:13:46.319 Device Information : IOPS MiB/s Average min max 00:13:46.319 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 40004.00 156.27 3202.06 835.41 6819.95 00:13:46.319 ======================================================== 00:13:46.319 Total : 40004.00 156.27 3202.06 835.41 6819.95 00:13:46.319 00:13:46.319 [2024-07-12 19:09:52.048313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:46.319 19:09:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:46.319 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.319 [2024-07-12 19:09:52.227864] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:51.608 Initializing NVMe Controllers 00:13:51.608 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:51.608 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:51.608 Initialization complete. Launching workers. 
00:13:51.608 ======================================================== 00:13:51.608 Latency(us) 00:13:51.608 Device Information : IOPS MiB/s Average min max 00:13:51.608 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35882.75 140.17 3566.86 1101.87 6718.72 00:13:51.608 ======================================================== 00:13:51.608 Total : 35882.75 140.17 3566.86 1101.87 6718.72 00:13:51.608 00:13:51.608 [2024-07-12 19:09:57.250052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:51.608 19:09:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:51.608 EAL: No free 2048 kB hugepages reported on node 1 00:13:51.608 [2024-07-12 19:09:57.439183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.894 [2024-07-12 19:10:02.585206] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.894 Initializing NVMe Controllers 00:13:56.894 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:56.894 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:56.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:56.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:56.894 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:56.894 Initialization complete. Launching workers. 00:13:56.894 Starting thread on core 2 00:13:56.894 Starting thread on core 3 00:13:56.894 Starting thread on core 1 00:13:56.894 19:10:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:56.894 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.894 [2024-07-12 19:10:02.848334] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.192 [2024-07-12 19:10:05.925077] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.192 Initializing NVMe Controllers 00:14:00.192 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.192 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.192 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:00.192 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:00.192 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:00.192 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:00.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:00.192 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:00.192 Initialization complete. Launching workers. 
00:14:00.192 Starting thread on core 1 with urgent priority queue 00:14:00.192 Starting thread on core 2 with urgent priority queue 00:14:00.192 Starting thread on core 3 with urgent priority queue 00:14:00.192 Starting thread on core 0 with urgent priority queue 00:14:00.192 SPDK bdev Controller (SPDK2 ) core 0: 9291.00 IO/s 10.76 secs/100000 ios 00:14:00.192 SPDK bdev Controller (SPDK2 ) core 1: 8708.67 IO/s 11.48 secs/100000 ios 00:14:00.192 SPDK bdev Controller (SPDK2 ) core 2: 9439.67 IO/s 10.59 secs/100000 ios 00:14:00.192 SPDK bdev Controller (SPDK2 ) core 3: 6525.67 IO/s 15.32 secs/100000 ios 00:14:00.192 ======================================================== 00:14:00.192 00:14:00.192 19:10:05 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:00.192 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.192 [2024-07-12 19:10:06.191571] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:00.192 Initializing NVMe Controllers 00:14:00.192 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.192 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:00.192 Namespace ID: 1 size: 0GB 00:14:00.192 Initialization complete. 00:14:00.192 INFO: using host memory buffer for IO 00:14:00.192 Hello world! 00:14:00.192 [2024-07-12 19:10:06.201638] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:00.192 19:10:06 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:00.192 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.452 [2024-07-12 19:10:06.458100] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.834 Initializing NVMe Controllers 00:14:01.834 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:01.834 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:01.834 Initialization complete. Launching workers. 
00:14:01.834 submit (in ns) avg, min, max = 7971.0, 3904.2, 4000290.0 00:14:01.834 complete (in ns) avg, min, max = 17710.2, 2409.2, 3998530.8 00:14:01.834 00:14:01.834 Submit histogram 00:14:01.834 ================ 00:14:01.834 Range in us Cumulative Count 00:14:01.834 3.893 - 3.920: 0.6681% ( 130) 00:14:01.834 3.920 - 3.947: 4.5485% ( 755) 00:14:01.834 3.947 - 3.973: 12.8540% ( 1616) 00:14:01.834 3.973 - 4.000: 23.3900% ( 2050) 00:14:01.834 4.000 - 4.027: 33.7719% ( 2020) 00:14:01.834 4.027 - 4.053: 43.8865% ( 1968) 00:14:01.834 4.053 - 4.080: 58.2412% ( 2793) 00:14:01.834 4.080 - 4.107: 74.0813% ( 3082) 00:14:01.834 4.107 - 4.133: 87.3208% ( 2576) 00:14:01.834 4.133 - 4.160: 95.0558% ( 1505) 00:14:01.834 4.160 - 4.187: 98.0521% ( 583) 00:14:01.834 4.187 - 4.213: 99.0338% ( 191) 00:14:01.834 4.213 - 4.240: 99.4244% ( 76) 00:14:01.834 4.240 - 4.267: 99.4912% ( 13) 00:14:01.834 4.267 - 4.293: 99.5169% ( 5) 00:14:01.834 4.293 - 4.320: 99.5220% ( 1) 00:14:01.834 4.320 - 4.347: 99.5272% ( 1) 00:14:01.834 4.533 - 4.560: 99.5323% ( 1) 00:14:01.834 5.093 - 5.120: 99.5374% ( 1) 00:14:01.834 5.360 - 5.387: 99.5426% ( 1) 00:14:01.834 5.627 - 5.653: 99.5477% ( 1) 00:14:01.834 5.760 - 5.787: 99.5529% ( 1) 00:14:01.834 5.840 - 5.867: 99.5580% ( 1) 00:14:01.834 5.920 - 5.947: 99.5631% ( 1) 00:14:01.834 5.973 - 6.000: 99.5683% ( 1) 00:14:01.834 6.027 - 6.053: 99.5734% ( 1) 00:14:01.834 6.053 - 6.080: 99.5786% ( 1) 00:14:01.834 6.080 - 6.107: 99.5837% ( 1) 00:14:01.834 6.107 - 6.133: 99.5888% ( 1) 00:14:01.834 6.133 - 6.160: 99.5991% ( 2) 00:14:01.834 6.160 - 6.187: 99.6043% ( 1) 00:14:01.834 6.187 - 6.213: 99.6094% ( 1) 00:14:01.834 6.267 - 6.293: 99.6145% ( 1) 00:14:01.834 6.293 - 6.320: 99.6197% ( 1) 00:14:01.834 6.320 - 6.347: 99.6248% ( 1) 00:14:01.834 6.373 - 6.400: 99.6300% ( 1) 00:14:01.834 6.400 - 6.427: 99.6351% ( 1) 00:14:01.834 6.427 - 6.453: 99.6454% ( 2) 00:14:01.834 6.453 - 6.480: 99.6505% ( 1) 00:14:01.835 6.480 - 6.507: 99.6557% ( 1) 00:14:01.835 6.507 - 6.533: 99.6659% ( 2) 00:14:01.835 6.533 - 6.560: 99.6762% ( 2) 00:14:01.835 6.560 - 6.587: 99.6813% ( 1) 00:14:01.835 6.587 - 6.613: 99.6865% ( 1) 00:14:01.835 6.613 - 6.640: 99.7019% ( 3) 00:14:01.835 6.667 - 6.693: 99.7173% ( 3) 00:14:01.835 6.693 - 6.720: 99.7225% ( 1) 00:14:01.835 6.720 - 6.747: 99.7327% ( 2) 00:14:01.835 6.773 - 6.800: 99.7482% ( 3) 00:14:01.835 6.827 - 6.880: 99.7533% ( 1) 00:14:01.835 6.933 - 6.987: 99.7636% ( 2) 00:14:01.835 7.040 - 7.093: 99.7841% ( 4) 00:14:01.835 7.147 - 7.200: 99.7996% ( 3) 00:14:01.835 7.253 - 7.307: 99.8201% ( 4) 00:14:01.835 7.307 - 7.360: 99.8304% ( 2) 00:14:01.835 7.413 - 7.467: 99.8407% ( 2) 00:14:01.835 7.467 - 7.520: 99.8510% ( 2) 00:14:01.835 7.520 - 7.573: 99.8561% ( 1) 00:14:01.835 7.733 - 7.787: 99.8612% ( 1) 00:14:01.835 7.787 - 7.840: 99.8664% ( 1) 00:14:01.835 8.320 - 8.373: 99.8715% ( 1) 00:14:01.835 8.533 - 8.587: 99.8767% ( 1) 00:14:01.835 8.587 - 8.640: 99.8818% ( 1) 00:14:01.835 11.307 - 11.360: 99.8869% ( 1) 00:14:01.835 14.293 - 14.400: 99.8921% ( 1) 00:14:01.835 15.040 - 15.147: 99.8972% ( 1) 00:14:01.835 36.693 - 36.907: 99.9023% ( 1) 00:14:01.835 3986.773 - 4014.080: 100.0000% ( 19) 00:14:01.835 00:14:01.835 Complete histogram 00:14:01.835 ================== 00:14:01.835 Range in us Cumulative Count 00:14:01.835 2.400 - 2.413: 0.0051% ( 1) 00:14:01.835 2.413 - 2.427: 0.0822% ( 15) 00:14:01.835 2.427 - 2.440: 0.8943% ( 158) 00:14:01.835 2.440 - 2.453: 0.9765% ( 16) 00:14:01.835 2.453 - 2.467: 1.2078% ( 45) 00:14:01.835 2.467 - 2.480: 4.2607% ( 594) 00:14:01.835 2.480 - 
2.493: 44.7808% ( 7884) 00:14:01.835 2.493 - 2.507: 55.9747% ( 2178) 00:14:01.835 2.507 - 2.520: 70.7509% ( 2875) 00:14:01.835 2.520 - [2024-07-12 19:10:07.553938] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:01.835 2.533: 79.4007% ( 1683) 00:14:01.835 2.533 - 2.547: 81.9654% ( 499) 00:14:01.835 2.547 - 2.560: 85.3780% ( 664) 00:14:01.835 2.560 - 2.573: 90.5176% ( 1000) 00:14:01.835 2.573 - 2.587: 94.5983% ( 794) 00:14:01.835 2.587 - 2.600: 97.3943% ( 544) 00:14:01.835 2.600 - 2.613: 98.7819% ( 270) 00:14:01.835 2.613 - 2.627: 99.3164% ( 104) 00:14:01.835 2.627 - 2.640: 99.4038% ( 17) 00:14:01.835 2.640 - 2.653: 99.4244% ( 4) 00:14:01.835 2.653 - 2.667: 99.4347% ( 2) 00:14:01.835 4.773 - 4.800: 99.4398% ( 1) 00:14:01.835 4.800 - 4.827: 99.4501% ( 2) 00:14:01.835 4.880 - 4.907: 99.4603% ( 2) 00:14:01.835 4.907 - 4.933: 99.4706% ( 2) 00:14:01.835 4.933 - 4.960: 99.4860% ( 3) 00:14:01.835 4.960 - 4.987: 99.4912% ( 1) 00:14:01.835 5.013 - 5.040: 99.4963% ( 1) 00:14:01.835 5.147 - 5.173: 99.5015% ( 1) 00:14:01.835 5.173 - 5.200: 99.5066% ( 1) 00:14:01.835 5.280 - 5.307: 99.5117% ( 1) 00:14:01.835 5.387 - 5.413: 99.5169% ( 1) 00:14:01.835 5.413 - 5.440: 99.5220% ( 1) 00:14:01.835 5.440 - 5.467: 99.5272% ( 1) 00:14:01.835 5.520 - 5.547: 99.5323% ( 1) 00:14:01.835 5.680 - 5.707: 99.5374% ( 1) 00:14:01.835 5.707 - 5.733: 99.5426% ( 1) 00:14:01.835 5.787 - 5.813: 99.5477% ( 1) 00:14:01.835 5.840 - 5.867: 99.5529% ( 1) 00:14:01.835 5.867 - 5.893: 99.5580% ( 1) 00:14:01.835 5.893 - 5.920: 99.5631% ( 1) 00:14:01.835 5.920 - 5.947: 99.5683% ( 1) 00:14:01.835 5.973 - 6.000: 99.5786% ( 2) 00:14:01.835 6.187 - 6.213: 99.5837% ( 1) 00:14:01.835 6.240 - 6.267: 99.5888% ( 1) 00:14:01.835 6.320 - 6.347: 99.5940% ( 1) 00:14:01.835 6.347 - 6.373: 99.5991% ( 1) 00:14:01.835 10.347 - 10.400: 99.6043% ( 1) 00:14:01.835 10.667 - 10.720: 99.6094% ( 1) 00:14:01.835 10.933 - 10.987: 99.6145% ( 1) 00:14:01.835 11.093 - 11.147: 99.6197% ( 1) 00:14:01.835 3986.773 - 4014.080: 100.0000% ( 74) 00:14:01.835 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:01.835 [ 00:14:01.835 { 00:14:01.835 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:01.835 "subtype": "Discovery", 00:14:01.835 "listen_addresses": [], 00:14:01.835 "allow_any_host": true, 00:14:01.835 "hosts": [] 00:14:01.835 }, 00:14:01.835 { 00:14:01.835 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:01.835 "subtype": "NVMe", 00:14:01.835 "listen_addresses": [ 00:14:01.835 { 00:14:01.835 "trtype": "VFIOUSER", 00:14:01.835 "adrfam": "IPv4", 00:14:01.835 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:01.835 "trsvcid": "0" 00:14:01.835 } 00:14:01.835 ], 00:14:01.835 "allow_any_host": true, 00:14:01.835 "hosts": [], 00:14:01.835 "serial_number": "SPDK1", 00:14:01.835 "model_number": "SPDK bdev Controller", 00:14:01.835 "max_namespaces": 32, 00:14:01.835 
"min_cntlid": 1, 00:14:01.835 "max_cntlid": 65519, 00:14:01.835 "namespaces": [ 00:14:01.835 { 00:14:01.835 "nsid": 1, 00:14:01.835 "bdev_name": "Malloc1", 00:14:01.835 "name": "Malloc1", 00:14:01.835 "nguid": "7D1FAB5E568342FEBAEB509DB97845FC", 00:14:01.835 "uuid": "7d1fab5e-5683-42fe-baeb-509db97845fc" 00:14:01.835 }, 00:14:01.835 { 00:14:01.835 "nsid": 2, 00:14:01.835 "bdev_name": "Malloc3", 00:14:01.835 "name": "Malloc3", 00:14:01.835 "nguid": "9921EB8A6D124A4F9F1ECAB41700192F", 00:14:01.835 "uuid": "9921eb8a-6d12-4a4f-9f1e-cab41700192f" 00:14:01.835 } 00:14:01.835 ] 00:14:01.835 }, 00:14:01.835 { 00:14:01.835 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:01.835 "subtype": "NVMe", 00:14:01.835 "listen_addresses": [ 00:14:01.835 { 00:14:01.835 "trtype": "VFIOUSER", 00:14:01.835 "adrfam": "IPv4", 00:14:01.835 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:01.835 "trsvcid": "0" 00:14:01.835 } 00:14:01.835 ], 00:14:01.835 "allow_any_host": true, 00:14:01.835 "hosts": [], 00:14:01.835 "serial_number": "SPDK2", 00:14:01.835 "model_number": "SPDK bdev Controller", 00:14:01.835 "max_namespaces": 32, 00:14:01.835 "min_cntlid": 1, 00:14:01.835 "max_cntlid": 65519, 00:14:01.835 "namespaces": [ 00:14:01.835 { 00:14:01.835 "nsid": 1, 00:14:01.835 "bdev_name": "Malloc2", 00:14:01.835 "name": "Malloc2", 00:14:01.835 "nguid": "2AA2A7C5E07444588D138C88BDE92197", 00:14:01.835 "uuid": "2aa2a7c5-e074-4458-8d13-8c88bde92197" 00:14:01.835 } 00:14:01.835 ] 00:14:01.835 } 00:14:01.835 ] 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1344829 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:01.835 EAL: No free 2048 kB hugepages reported on node 1 00:14:01.835 [2024-07-12 19:10:07.927523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:01.835 Malloc4 00:14:01.835 19:10:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:02.096 [2024-07-12 19:10:08.097615] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:02.096 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:02.096 Asynchronous Event Request test 00:14:02.096 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:02.096 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:02.096 Registering asynchronous event callbacks... 00:14:02.096 Starting namespace attribute notice tests for all controllers... 00:14:02.096 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:02.096 aer_cb - Changed Namespace 00:14:02.096 Cleaning up... 00:14:02.357 [ 00:14:02.357 { 00:14:02.357 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:02.357 "subtype": "Discovery", 00:14:02.357 "listen_addresses": [], 00:14:02.357 "allow_any_host": true, 00:14:02.357 "hosts": [] 00:14:02.357 }, 00:14:02.357 { 00:14:02.357 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:02.357 "subtype": "NVMe", 00:14:02.357 "listen_addresses": [ 00:14:02.357 { 00:14:02.357 "trtype": "VFIOUSER", 00:14:02.357 "adrfam": "IPv4", 00:14:02.357 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:02.357 "trsvcid": "0" 00:14:02.357 } 00:14:02.357 ], 00:14:02.357 "allow_any_host": true, 00:14:02.357 "hosts": [], 00:14:02.357 "serial_number": "SPDK1", 00:14:02.357 "model_number": "SPDK bdev Controller", 00:14:02.357 "max_namespaces": 32, 00:14:02.357 "min_cntlid": 1, 00:14:02.357 "max_cntlid": 65519, 00:14:02.357 "namespaces": [ 00:14:02.357 { 00:14:02.357 "nsid": 1, 00:14:02.357 "bdev_name": "Malloc1", 00:14:02.357 "name": "Malloc1", 00:14:02.357 "nguid": "7D1FAB5E568342FEBAEB509DB97845FC", 00:14:02.357 "uuid": "7d1fab5e-5683-42fe-baeb-509db97845fc" 00:14:02.357 }, 00:14:02.357 { 00:14:02.357 "nsid": 2, 00:14:02.357 "bdev_name": "Malloc3", 00:14:02.357 "name": "Malloc3", 00:14:02.357 "nguid": "9921EB8A6D124A4F9F1ECAB41700192F", 00:14:02.357 "uuid": "9921eb8a-6d12-4a4f-9f1e-cab41700192f" 00:14:02.357 } 00:14:02.357 ] 00:14:02.357 }, 00:14:02.357 { 00:14:02.357 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:02.357 "subtype": "NVMe", 00:14:02.357 "listen_addresses": [ 00:14:02.357 { 00:14:02.357 "trtype": "VFIOUSER", 00:14:02.357 "adrfam": "IPv4", 00:14:02.357 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:02.357 "trsvcid": "0" 00:14:02.357 } 00:14:02.357 ], 00:14:02.357 "allow_any_host": true, 00:14:02.357 "hosts": [], 00:14:02.358 "serial_number": "SPDK2", 00:14:02.358 "model_number": "SPDK bdev Controller", 00:14:02.358 
"max_namespaces": 32, 00:14:02.358 "min_cntlid": 1, 00:14:02.358 "max_cntlid": 65519, 00:14:02.358 "namespaces": [ 00:14:02.358 { 00:14:02.358 "nsid": 1, 00:14:02.358 "bdev_name": "Malloc2", 00:14:02.358 "name": "Malloc2", 00:14:02.358 "nguid": "2AA2A7C5E07444588D138C88BDE92197", 00:14:02.358 "uuid": "2aa2a7c5-e074-4458-8d13-8c88bde92197" 00:14:02.358 }, 00:14:02.358 { 00:14:02.358 "nsid": 2, 00:14:02.358 "bdev_name": "Malloc4", 00:14:02.358 "name": "Malloc4", 00:14:02.358 "nguid": "7275F34FA0974296A77DA9B63FFDB126", 00:14:02.358 "uuid": "7275f34f-a097-4296-a77d-a9b63ffdb126" 00:14:02.358 } 00:14:02.358 ] 00:14:02.358 } 00:14:02.358 ] 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1344829 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1335736 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1335736 ']' 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1335736 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1335736 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1335736' 00:14:02.358 killing process with pid 1335736 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1335736 00:14:02.358 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1335736 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1345117 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1345117' 00:14:02.618 Process pid: 1345117 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1345117 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1345117 ']' 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:02.618 19:10:08 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:02.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:02.618 19:10:08 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:02.618 [2024-07-12 19:10:08.583926] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:02.618 [2024-07-12 19:10:08.584862] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:14:02.618 [2024-07-12 19:10:08.584903] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:02.618 EAL: No free 2048 kB hugepages reported on node 1 00:14:02.618 [2024-07-12 19:10:08.645511] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:02.618 [2024-07-12 19:10:08.712454] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:02.618 [2024-07-12 19:10:08.712489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:02.618 [2024-07-12 19:10:08.712496] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:02.618 [2024-07-12 19:10:08.712503] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:02.619 [2024-07-12 19:10:08.712508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:02.619 [2024-07-12 19:10:08.712645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:02.619 [2024-07-12 19:10:08.712770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:02.619 [2024-07-12 19:10:08.712927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.619 [2024-07-12 19:10:08.712928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:02.879 [2024-07-12 19:10:08.779435] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:02.879 [2024-07-12 19:10:08.779550] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:02.879 [2024-07-12 19:10:08.780542] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:02.879 [2024-07-12 19:10:08.780891] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:02.879 [2024-07-12 19:10:08.780993] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
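For reference, the xtrace that follows configures the two vfio-user controllers on the freshly started interrupt-mode target. Condensed into a plain-shell sketch built only from the rpc.py calls traced below (the $rpc variable and the written-out loop body for controller 1 are shorthand introduced here; the flags and paths are as traced), one controller's setup looks like:

    # rpc.py path as used throughout this job's workspace
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # interrupt-mode pass: transport created with the extra '-M -I' arguments, as traced
    $rpc nvmf_create_transport -t VFIOUSER -M -I
    # one socket directory per vfio-user controller
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    # 64 MB malloc bdev with 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc1
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0

The same steps repeat for Malloc2 / SPDK2 / nqn.2019-07.io.spdk:cnode2 under /var/run/vfio-user/domain/vfio-user2/2, matching the subsystem layout reported by nvmf_get_subsystems earlier in the log.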
00:14:03.450 19:10:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:03.450 19:10:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:03.450 19:10:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:04.391 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:04.391 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:04.671 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:04.671 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:04.671 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:04.671 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:04.671 Malloc1 00:14:04.671 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:04.931 19:10:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:04.931 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:05.192 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:05.192 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:05.192 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:05.453 Malloc2 00:14:05.453 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:05.453 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:05.714 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1345117 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1345117 ']' 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1345117 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:05.975 19:10:11 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1345117 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1345117' 00:14:05.975 killing process with pid 1345117 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1345117 00:14:05.975 19:10:11 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1345117 00:14:05.975 19:10:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:05.975 19:10:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:05.975 00:14:05.975 real 0m50.529s 00:14:05.975 user 3m20.381s 00:14:05.975 sys 0m2.954s 00:14:05.975 19:10:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:05.975 19:10:12 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:05.975 ************************************ 00:14:05.975 END TEST nvmf_vfio_user 00:14:05.975 ************************************ 00:14:06.237 19:10:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:06.237 19:10:12 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:06.237 19:10:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:06.237 19:10:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.237 19:10:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:06.237 ************************************ 00:14:06.237 START TEST nvmf_vfio_user_nvme_compliance 00:14:06.237 ************************************ 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:06.237 * Looking for test storage... 
00:14:06.237 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.237 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@20 -- # nvmfpid=1345910 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1345910' 00:14:06.238 Process pid: 1345910 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1345910 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1345910 ']' 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.238 19:10:12 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.238 [2024-07-12 19:10:12.354346] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:14:06.238 [2024-07-12 19:10:12.354397] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.498 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.498 [2024-07-12 19:10:12.416763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.498 [2024-07-12 19:10:12.483305] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.498 [2024-07-12 19:10:12.483340] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.498 [2024-07-12 19:10:12.483348] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.498 [2024-07-12 19:10:12.483354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.498 [2024-07-12 19:10:12.483359] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
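Annotation: the compliance stage above launches its own target with nvmf_tgt -i 0 -e 0xFFFF -m 0x7. In that launch, -i sets the shared-memory id (NVMF_APP_SHM_ID in nvmf/common.sh), -e 0xFFFF enables all tracepoint groups (matching the "Tracepoint Group Mask 0xFFFF specified" notice in the trace), and -m 0x7 runs reactors on cores 0-2, the three cores reported by spdk_app_start. A minimal sketch, with $rootdir standing in for the Jenkins workspace path:

# Sketch only: the real script also traps SIGINT/SIGTERM and waits for the RPC socket
# (waitforlisten) before issuing any rpc_cmd calls.
$rootdir/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!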
00:14:06.498 [2024-07-12 19:10:12.483502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.498 [2024-07-12 19:10:12.483615] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.498 [2024-07-12 19:10:12.483617] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.083 19:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.083 19:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:14:07.083 19:10:13 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.025 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.286 malloc0 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:08.286 19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.286 
19:10:14 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:08.286 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.286 00:14:08.286 00:14:08.286 CUnit - A unit testing framework for C - Version 2.1-3 00:14:08.286 http://cunit.sourceforge.net/ 00:14:08.286 00:14:08.286 00:14:08.286 Suite: nvme_compliance 00:14:08.286 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-12 19:10:14.378584] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.286 [2024-07-12 19:10:14.379946] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:08.286 [2024-07-12 19:10:14.379957] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:08.286 [2024-07-12 19:10:14.379961] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:08.286 [2024-07-12 19:10:14.381600] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.548 passed 00:14:08.548 Test: admin_identify_ctrlr_verify_fused ...[2024-07-12 19:10:14.475205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.548 [2024-07-12 19:10:14.478220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.548 passed 00:14:08.548 Test: admin_identify_ns ...[2024-07-12 19:10:14.575395] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.548 [2024-07-12 19:10:14.635135] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:08.548 [2024-07-12 19:10:14.643135] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:08.548 [2024-07-12 19:10:14.664244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.809 passed 00:14:08.809 Test: admin_get_features_mandatory_features ...[2024-07-12 19:10:14.755870] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.809 [2024-07-12 19:10:14.758887] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.809 passed 00:14:08.809 Test: admin_get_features_optional_features ...[2024-07-12 19:10:14.853410] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:08.809 [2024-07-12 19:10:14.856426] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:08.809 passed 00:14:09.070 Test: admin_set_features_number_of_queues ...[2024-07-12 19:10:14.950374] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.070 [2024-07-12 19:10:15.055232] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.070 passed 00:14:09.070 Test: admin_get_log_page_mandatory_logs ...[2024-07-12 19:10:15.149271] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.070 [2024-07-12 19:10:15.152292] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.070 passed 00:14:09.331 Test: admin_get_log_page_with_lpo ...[2024-07-12 19:10:15.245400] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.331 [2024-07-12 19:10:15.313133] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:09.331 [2024-07-12 19:10:15.326170] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.331 passed 00:14:09.331 Test: fabric_property_get ...[2024-07-12 19:10:15.418240] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.331 [2024-07-12 19:10:15.419488] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:09.331 [2024-07-12 19:10:15.421256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.331 passed 00:14:09.592 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-12 19:10:15.516817] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.592 [2024-07-12 19:10:15.518077] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:09.592 [2024-07-12 19:10:15.519844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.592 passed 00:14:09.592 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-12 19:10:15.611370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.592 [2024-07-12 19:10:15.695131] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:09.592 [2024-07-12 19:10:15.711130] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:09.592 [2024-07-12 19:10:15.716208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.852 passed 00:14:09.852 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-12 19:10:15.810183] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.852 [2024-07-12 19:10:15.811429] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:09.852 [2024-07-12 19:10:15.813203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:09.852 passed 00:14:09.852 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-12 19:10:15.906370] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:09.852 [2024-07-12 19:10:15.982130] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:10.112 [2024-07-12 19:10:16.006127] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:10.112 [2024-07-12 19:10:16.011203] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.112 passed 00:14:10.112 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-12 19:10:16.105223] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.112 [2024-07-12 19:10:16.106466] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:10.112 [2024-07-12 19:10:16.106486] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:10.112 [2024-07-12 19:10:16.108234] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.112 passed 00:14:10.112 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-12 19:10:16.201354] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.373 [2024-07-12 19:10:16.293132] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:14:10.373 [2024-07-12 19:10:16.301133] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:10.373 [2024-07-12 19:10:16.309127] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:10.373 [2024-07-12 19:10:16.317129] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:10.373 [2024-07-12 19:10:16.346208] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.373 passed 00:14:10.373 Test: admin_create_io_sq_verify_pc ...[2024-07-12 19:10:16.440187] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:10.373 [2024-07-12 19:10:16.456240] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:10.373 [2024-07-12 19:10:16.473342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:10.633 passed 00:14:10.633 Test: admin_create_io_qp_max_qps ...[2024-07-12 19:10:16.567859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:11.575 [2024-07-12 19:10:17.680132] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:14:12.146 [2024-07-12 19:10:18.067580] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.146 passed 00:14:12.146 Test: admin_create_io_sq_shared_cq ...[2024-07-12 19:10:18.160675] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:12.407 [2024-07-12 19:10:18.292127] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:12.407 [2024-07-12 19:10:18.329189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:12.407 passed 00:14:12.407 00:14:12.407 Run Summary: Type Total Ran Passed Failed Inactive 00:14:12.407 suites 1 1 n/a 0 0 00:14:12.407 tests 18 18 18 0 0 00:14:12.407 asserts 360 360 360 0 n/a 00:14:12.407 00:14:12.407 Elapsed time = 1.658 seconds 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1345910 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1345910 ']' 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1345910 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1345910 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1345910' 00:14:12.407 killing process with pid 1345910 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1345910 00:14:12.407 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1345910 00:14:12.668 19:10:18 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:12.668 00:14:12.668 real 0m6.408s 00:14:12.668 user 0m18.394s 00:14:12.668 sys 0m0.441s 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:12.668 ************************************ 00:14:12.668 END TEST nvmf_vfio_user_nvme_compliance 00:14:12.668 ************************************ 00:14:12.668 19:10:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:12.668 19:10:18 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:12.668 19:10:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:12.668 19:10:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:12.668 19:10:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:12.668 ************************************ 00:14:12.668 START TEST nvmf_vfio_user_fuzz 00:14:12.668 ************************************ 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:12.668 * Looking for test storage... 00:14:12.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:12.668 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:12.669 19:10:18 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1347247 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1347247' 00:14:12.669 Process pid: 1347247 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:12.669 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1347247 00:14:12.929 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1347247 ']' 00:14:12.929 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.929 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:12.929 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:12.929 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:12.929 19:10:18 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:13.515 19:10:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:13.515 19:10:19 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:14:13.515 19:10:19 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.503 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.764 malloc0 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:14.764 19:10:20 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:46.877 Fuzzing completed. 
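Annotation: once the target and the nqn.2021-09.io.spdk:cnode0 subsystem are in place, the fuzz stage above runs nvme_fuzz directly against the vfio-user socket. In the captured invocation, -m 0x2 pins the fuzzer to core 1, -t 30 bounds the run to roughly 30 seconds, -S 123456 fixes the random seed, and -F names the target to attack; -N and -a are reproduced as captured and their exact meaning should be checked against the tool's usage output. A minimal sketch, with $rootdir standing in for the Jenkins workspace path:

# Sketch of the captured invocation; the trid string is exactly as traced above.
fuzz_trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user'
$rootdir/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F "$fuzz_trid" -N -a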
Shutting down the fuzz application 00:14:46.877 00:14:46.877 Dumping successful admin opcodes: 00:14:46.877 8, 9, 10, 24, 00:14:46.877 Dumping successful io opcodes: 00:14:46.877 0, 00:14:46.877 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1155102, total successful commands: 4542, random_seed: 1263852992 00:14:46.877 NS: 0x200003a1ef00 admin qp, Total commands completed: 145364, total successful commands: 1179, random_seed: 3386532032 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1347247 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1347247 ']' 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1347247 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1347247 00:14:46.877 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1347247' 00:14:46.878 killing process with pid 1347247 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1347247 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1347247 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:46.878 00:14:46.878 real 0m33.680s 00:14:46.878 user 0m38.343s 00:14:46.878 sys 0m25.702s 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:46.878 19:10:52 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:46.878 ************************************ 00:14:46.878 END TEST nvmf_vfio_user_fuzz 00:14:46.878 ************************************ 00:14:46.878 19:10:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:46.878 19:10:52 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:46.878 19:10:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:46.878 19:10:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:46.878 19:10:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:46.878 ************************************ 
00:14:46.878 START TEST nvmf_host_management 00:14:46.878 ************************************ 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:46.878 * Looking for test storage... 00:14:46.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.878 
19:10:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:46.878 19:10:52 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.878 19:10:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:46.879 19:10:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:46.879 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:46.879 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:46.879 19:10:52 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:46.879 19:10:52 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:53.484 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:14:53.485 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:14:53.485 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:14:53.485 Found net devices under 0000:4b:00.0: cvl_0_0 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:14:53.485 Found net devices under 0000:4b:00.1: cvl_0_1 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:53.485 19:10:59 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:53.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:53.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:14:53.745 00:14:53.745 --- 10.0.0.2 ping statistics --- 00:14:53.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.745 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:53.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:53.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:14:53.745 00:14:53.745 --- 10.0.0.1 ping statistics --- 00:14:53.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:53.745 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1357317 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1357317 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1357317 ']' 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:53.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:53.745 19:10:59 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:53.745 [2024-07-12 19:10:59.767254] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:14:53.745 [2024-07-12 19:10:59.767339] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.745 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.745 [2024-07-12 19:10:59.856766] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:54.005 [2024-07-12 19:10:59.955106] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.005 [2024-07-12 19:10:59.955168] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.005 [2024-07-12 19:10:59.955177] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:54.005 [2024-07-12 19:10:59.955184] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:54.005 [2024-07-12 19:10:59.955191] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:54.005 [2024-07-12 19:10:59.955365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:54.005 [2024-07-12 19:10:59.955534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:54.005 [2024-07-12 19:10:59.955701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:54.005 [2024-07-12 19:10:59.955701] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 [2024-07-12 19:11:00.588571] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 19:11:00 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 Malloc0 00:14:54.573 [2024-07-12 19:11:00.647704] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1357662 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1357662 /var/tmp/bdevperf.sock 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1357662 ']' 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:54.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
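The rpcs.txt file that host_management.sh assembles (the cat at host_management.sh@23 above) is consumed by rpc_cmd, but its contents are not echoed to this log. A plausible reconstruction of the RPC sequence behind the Malloc0 bdev and the 10.0.0.2:4420 listener reported above, using the standard scripts/rpc.py client, is sketched below; the malloc size/block size and the serial number are placeholders, not values taken from this run.

    # Hypothetical reconstruction of the target-side setup; the test drives these via rpc_cmd.
    rpc.py nvmf_create_transport -t tcp -o -u 8192                      # matches the call logged above
    rpc.py bdev_malloc_create 64 512 -b Malloc0                         # size/block size assumed
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0    # serial number assumed
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

Only nqn.2016-06.io.spdk:host0 appears to be allowed on the subsystem, which is what lets the later nvmf_subsystem_remove_host call force a disconnect.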
00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:54.573 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:54.573 { 00:14:54.573 "params": { 00:14:54.573 "name": "Nvme$subsystem", 00:14:54.573 "trtype": "$TEST_TRANSPORT", 00:14:54.573 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:54.573 "adrfam": "ipv4", 00:14:54.573 "trsvcid": "$NVMF_PORT", 00:14:54.574 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:54.574 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:54.574 "hdgst": ${hdgst:-false}, 00:14:54.574 "ddgst": ${ddgst:-false} 00:14:54.574 }, 00:14:54.574 "method": "bdev_nvme_attach_controller" 00:14:54.574 } 00:14:54.574 EOF 00:14:54.574 )") 00:14:54.834 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:54.834 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:54.834 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:54.834 19:11:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:54.834 "params": { 00:14:54.834 "name": "Nvme0", 00:14:54.834 "trtype": "tcp", 00:14:54.834 "traddr": "10.0.0.2", 00:14:54.834 "adrfam": "ipv4", 00:14:54.834 "trsvcid": "4420", 00:14:54.834 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:54.834 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:54.834 "hdgst": false, 00:14:54.834 "ddgst": false 00:14:54.834 }, 00:14:54.834 "method": "bdev_nvme_attach_controller" 00:14:54.834 }' 00:14:54.834 [2024-07-12 19:11:00.746721] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:14:54.834 [2024-07-12 19:11:00.746773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1357662 ] 00:14:54.834 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.834 [2024-07-12 19:11:00.805686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.834 [2024-07-12 19:11:00.870571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.095 Running I/O for 10 seconds... 
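The JSON printed by gen_nvmf_target_json above is handed to bdevperf through the --json /dev/fd/63 process substitution; it encodes a single bdev_nvme_attach_controller call. The same attachment expressed as an explicit RPC against bdevperf's RPC socket would be roughly the following; this is a sketch using the addresses and NQNs from the generated config, not a command taken from this run.

    # Equivalent of the generated JSON config, issued as one RPC (sketch).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0

Passing the config as --json instead lets bdevperf create the Nvme0n1 bdev during startup, before the "Running I/O for 10 seconds..." verify workload begins.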
00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=399 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 399 -ge 100 ']' 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.669 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:55.669 [2024-07-12 19:11:01.590765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.590810] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.590818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 
00:14:55.669 [identical tcp.c:1607:nvmf_tcp_qpair_set_recv_state "The recv state of tqpair=0xf551f0 is same with the state(5) to be set" messages from 19:11:01.590825 through 19:11:01.591099 omitted as duplicates] 00:14:55.669 [2024-07-12 19:11:01.591107] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591114] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591121] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591132] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591138] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591145] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591151] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591157] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591170] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591176] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591183] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591195] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591202] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.669 [2024-07-12 19:11:01.591208] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.670 [2024-07-12 19:11:01.591214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf551f0 is same with the state(5) to be set 00:14:55.670 [2024-07-12 19:11:01.591895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.591931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.591951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.591960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.591971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:14:55.670 [2024-07-12 19:11:01.591979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.591989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.591997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 
19:11:01.592176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592364] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592553] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.670 [2024-07-12 19:11:01.592714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.670 [2024-07-12 19:11:01.592722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592744] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592929] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.592989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.592999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:55.671 [2024-07-12 19:11:01.593142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.671 [2024-07-12 19:11:01.593151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x97d4f0 is same with the state(5) to be set 00:14:55.671 [2024-07-12 19:11:01.593197] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x97d4f0 was disconnected and freed. reset controller. 00:14:55.671 [2024-07-12 19:11:01.594434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:14:55.671 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.671 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:55.671 task offset: 57344 on job bdev=Nvme0n1 fails 00:14:55.671 00:14:55.671 Latency(us) 00:14:55.671 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.671 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:55.671 Job: Nvme0n1 ended in about 0.42 seconds with error 00:14:55.671 Verification LBA range: start 0x0 length 0x400 00:14:55.671 Nvme0n1 : 0.42 1067.13 66.70 152.45 0.00 50978.77 9065.81 44782.93 00:14:55.671 =================================================================================================================== 00:14:55.671 Total : 1067.13 66.70 152.45 0.00 50978.77 9065.81 44782.93 00:14:55.671 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:55.671 [2024-07-12 19:11:01.596455] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:55.671 [2024-07-12 19:11:01.596481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x56c3b0 (9): Bad file descriptor 00:14:55.671 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:55.671 19:11:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:55.671 19:11:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:14:55.671 [2024-07-12 19:11:01.646762] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
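The sequence above is the core of the test: waitforio polls bdevperf's RPC socket until the verify job has completed at least 100 reads (read_io_count=399 here), nvmf_subsystem_remove_host then revokes the host's access, which produces the tqpair state errors and ABORTED - SQ DELETION completions, and nvmf_subsystem_add_host allows the subsequent controller reset to succeed. A rough approximation of the polling helper follows; only the helper name, socket path, bdev name and jq filter are taken from the log, the loop body and retry interval are assumptions.

    # Sketch of a waitforio-style poll (not the verbatim host_management.sh implementation).
    waitforio() {
        local rpc_sock=$1 bdev=$2 i ops
        for ((i = 10; i != 0; i--)); do
            ops=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
            [[ $ops -ge 100 ]] && return 0
            sleep 1
        done
        return 1
    }
    waitforio /var/tmp/bdevperf.sock Nvme0n1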
00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1357662 00:14:56.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1357662) - No such process 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:56.611 { 00:14:56.611 "params": { 00:14:56.611 "name": "Nvme$subsystem", 00:14:56.611 "trtype": "$TEST_TRANSPORT", 00:14:56.611 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:56.611 "adrfam": "ipv4", 00:14:56.611 "trsvcid": "$NVMF_PORT", 00:14:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:56.611 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:56.611 "hdgst": ${hdgst:-false}, 00:14:56.611 "ddgst": ${ddgst:-false} 00:14:56.611 }, 00:14:56.611 "method": "bdev_nvme_attach_controller" 00:14:56.611 } 00:14:56.611 EOF 00:14:56.611 )") 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:56.611 19:11:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:56.611 "params": { 00:14:56.611 "name": "Nvme0", 00:14:56.611 "trtype": "tcp", 00:14:56.611 "traddr": "10.0.0.2", 00:14:56.611 "adrfam": "ipv4", 00:14:56.611 "trsvcid": "4420", 00:14:56.611 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:56.611 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:56.611 "hdgst": false, 00:14:56.611 "ddgst": false 00:14:56.611 }, 00:14:56.611 "method": "bdev_nvme_attach_controller" 00:14:56.611 }' 00:14:56.611 [2024-07-12 19:11:02.662549] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:14:56.611 [2024-07-12 19:11:02.662604] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1358016 ] 00:14:56.611 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.611 [2024-07-12 19:11:02.721286] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.871 [2024-07-12 19:11:02.785071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.871 Running I/O for 1 seconds... 
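For both bdevperf summary tables (the interrupted run above and the 1-second rerun that follows), the MiB/s column is simply IOPS multiplied by the 64 KiB I/O size (-o 65536), which gives a quick sanity check when reading the output:

    # Cross-check of the throughput column: 64 KiB I/Os means MiB/s = IOPS / 16.
    awk 'BEGIN { printf "%.2f\n", 1067.13 / 16 }'   # 66.70  (interrupted 10 s run above)
    awk 'BEGIN { printf "%.2f\n", 1207.01 / 16 }'   # 75.44  (1 s rerun below)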
00:14:57.813 00:14:57.813 Latency(us) 00:14:57.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:57.813 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:57.813 Verification LBA range: start 0x0 length 0x400 00:14:57.813 Nvme0n1 : 1.01 1207.01 75.44 0.00 0.00 52210.62 3112.96 44346.03 00:14:57.813 =================================================================================================================== 00:14:57.813 Total : 1207.01 75.44 0.00 0.00 52210.62 3112.96 44346.03 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:58.074 rmmod nvme_tcp 00:14:58.074 rmmod nvme_fabrics 00:14:58.074 rmmod nvme_keyring 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1357317 ']' 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1357317 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1357317 ']' 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1357317 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:58.074 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1357317 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1357317' 00:14:58.336 killing process with pid 1357317 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1357317 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1357317 00:14:58.336 [2024-07-12 19:11:04.336814] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:58.336 19:11:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.894 19:11:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:00.894 19:11:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:00.894 00:15:00.894 real 0m14.017s 00:15:00.894 user 0m22.156s 00:15:00.894 sys 0m6.172s 00:15:00.894 19:11:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:00.894 19:11:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:15:00.894 ************************************ 00:15:00.894 END TEST nvmf_host_management 00:15:00.894 ************************************ 00:15:00.894 19:11:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:00.894 19:11:06 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:00.894 19:11:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:00.894 19:11:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:00.894 19:11:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:00.894 ************************************ 00:15:00.894 START TEST nvmf_lvol 00:15:00.894 ************************************ 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:00.894 * Looking for test storage... 
00:15:00.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.894 19:11:06 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:15:00.894 19:11:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:07.483 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:07.484 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:07.484 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:07.484 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:07.484 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:07.484 
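The xtrace above shows common.sh locating the E810 ports by PCI ID (0x8086:0x159b) and then resolving each PCI function to its kernel net device through the sysfs glob at nvmf/common.sh@383. A minimal standalone sketch of that lookup (not part of the captured log), using the PCI address reported on this host as an example:

pci=0000:4b:00.0                                   # first E810 port found above
for netdev in /sys/bus/pci/devices/$pci/net/*; do  # same glob common.sh@383 expands
    [ -e "$netdev" ] || continue                   # skip if no net device is bound to the port
    echo "Found net device under $pci: ${netdev##*/}"
done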
19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:07.484 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:07.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:07.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:15:07.745 00:15:07.745 --- 10.0.0.2 ping statistics --- 00:15:07.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.745 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:07.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:07.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:15:07.745 00:15:07.745 --- 10.0.0.1 ping statistics --- 00:15:07.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:07.745 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1362427 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1362427 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1362427 ']' 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:07.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:07.745 19:11:13 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:08.049 [2024-07-12 19:11:13.877429] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:15:08.049 [2024-07-12 19:11:13.877493] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:08.049 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.049 [2024-07-12 19:11:13.950621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:08.049 [2024-07-12 19:11:14.025225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:08.049 [2024-07-12 19:11:14.025263] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:08.049 [2024-07-12 19:11:14.025272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:08.049 [2024-07-12 19:11:14.025279] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:08.049 [2024-07-12 19:11:14.025285] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:08.049 [2024-07-12 19:11:14.025424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:08.049 [2024-07-12 19:11:14.025540] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:08.049 [2024-07-12 19:11:14.025544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:08.621 19:11:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:08.880 [2024-07-12 19:11:14.822286] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:08.880 19:11:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:09.140 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:09.140 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:09.140 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:09.140 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:09.400 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:09.661 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b04566e9-60fc-40bc-946f-003756b14b27 00:15:09.661 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b04566e9-60fc-40bc-946f-003756b14b27 lvol 20 00:15:09.661 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=89b88590-59f7-4dd1-bae0-d4ee761b542e 00:15:09.661 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:09.921 19:11:15 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 89b88590-59f7-4dd1-bae0-d4ee761b542e 00:15:09.921 19:11:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
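The nvmf_lvol bring-up just logged can be read as one short rpc.py sequence. A condensed sketch (not part of the captured output), assuming the nvmf_tgt started above is running and reachable on rpc.py's default socket; the lvstore and lvol identifiers are captured from the commands' stdout the same way the test script does, and will differ per run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, same '-o -u 8192' options the test passes
$rpc bdev_malloc_create 64 512                                   # two 64 MiB RAM bdevs, 512-byte blocks
$rpc bdev_malloc_create 64 512
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'   # RAID-0 across both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore on the raid bdev; prints its UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # 20 MiB logical volume; prints its identifier
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"    # expose the lvol as a namespace
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420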
00:15:10.181 [2024-07-12 19:11:16.190696] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:10.181 19:11:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.441 19:11:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1363057 00:15:10.441 19:11:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:10.441 19:11:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:10.441 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.381 19:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 89b88590-59f7-4dd1-bae0-d4ee761b542e MY_SNAPSHOT 00:15:11.642 19:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=e079b90b-a1b8-495a-8090-18d36ed73259 00:15:11.642 19:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 89b88590-59f7-4dd1-bae0-d4ee761b542e 30 00:15:11.642 19:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone e079b90b-a1b8-495a-8090-18d36ed73259 MY_CLONE 00:15:11.902 19:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=2f4ea679-a573-484d-a7b0-485aecad4713 00:15:11.902 19:11:17 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 2f4ea679-a573-484d-a7b0-485aecad4713 00:15:12.472 19:11:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1363057 00:15:20.657 Initializing NVMe Controllers 00:15:20.657 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:20.657 Controller IO queue size 128, less than required. 00:15:20.657 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:20.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:20.657 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:20.657 Initialization complete. Launching workers. 
00:15:20.657 ======================================================== 00:15:20.657 Latency(us) 00:15:20.657 Device Information : IOPS MiB/s Average min max 00:15:20.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12384.40 48.38 10338.00 1559.77 61394.88 00:15:20.657 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17839.10 69.68 7175.36 1343.77 55279.46 00:15:20.657 ======================================================== 00:15:20.657 Total : 30223.50 118.06 8471.29 1343.77 61394.88 00:15:20.657 00:15:20.657 19:11:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:20.918 19:11:26 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 89b88590-59f7-4dd1-bae0-d4ee761b542e 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b04566e9-60fc-40bc-946f-003756b14b27 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:21.179 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:21.179 rmmod nvme_tcp 00:15:21.439 rmmod nvme_fabrics 00:15:21.439 rmmod nvme_keyring 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1362427 ']' 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1362427 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1362427 ']' 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1362427 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1362427 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1362427' 00:15:21.439 killing process with pid 1362427 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1362427 00:15:21.439 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1362427 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:21.700 
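Between the spdk_nvme_perf run and the teardown above, the test snapshots the live lvol, grows it, clones the snapshot, and inflates the clone. A sketch of that sequence (not part of the captured output) with the identifiers from this run substituted in; rerunning would return different UUIDs:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
lvol=89b88590-59f7-4dd1-bae0-d4ee761b542e             # lvol created during setup (value as logged)
lvs=b04566e9-60fc-40bc-946f-003756b14b27              # its lvstore (value as logged)

snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the volume while I/O is running
$rpc bdev_lvol_resize "$lvol" 30                      # grow the lvol from 20 to 30 (sizes in MiB)
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
$rpc bdev_lvol_inflate "$clone"                       # allocate the clone's clusters, detaching it from the snapshot

# teardown, as logged
$rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
$rpc bdev_lvol_delete "$lvol"
$rpc bdev_lvol_delete_lvstore -u "$lvs"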
19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:21.700 19:11:27 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.614 19:11:29 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:23.614 00:15:23.614 real 0m23.159s 00:15:23.614 user 1m3.851s 00:15:23.614 sys 0m7.617s 00:15:23.614 19:11:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.614 19:11:29 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:23.614 ************************************ 00:15:23.614 END TEST nvmf_lvol 00:15:23.614 ************************************ 00:15:23.614 19:11:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:23.614 19:11:29 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:23.614 19:11:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:23.614 19:11:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.614 19:11:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:23.614 ************************************ 00:15:23.614 START TEST nvmf_lvs_grow 00:15:23.614 ************************************ 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:23.876 * Looking for test storage... 
00:15:23.876 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:23.876 19:11:29 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:15:32.023 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:15:32.023 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.023 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:15:32.024 Found net devices under 0000:4b:00.0: cvl_0_0 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:15:32.024 Found net devices under 0000:4b:00.1: cvl_0_1 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:32.024 19:11:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:32.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:32.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.539 ms 00:15:32.024 00:15:32.024 --- 10.0.0.2 ping statistics --- 00:15:32.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.024 rtt min/avg/max/mdev = 0.539/0.539/0.539/0.000 ms 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:32.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:32.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:15:32.024 00:15:32.024 --- 10.0.0.1 ping statistics --- 00:15:32.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:32.024 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1369391 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1369391 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1369391 ']' 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.024 [2024-07-12 19:11:37.140101] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:15:32.024 [2024-07-12 19:11:37.140172] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:32.024 EAL: No free 2048 kB hugepages reported on node 1 00:15:32.024 [2024-07-12 19:11:37.210042] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:32.024 [2024-07-12 19:11:37.283097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:32.024 [2024-07-12 19:11:37.283138] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:32.024 [2024-07-12 19:11:37.283147] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:32.024 [2024-07-12 19:11:37.283153] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:32.024 [2024-07-12 19:11:37.283159] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:32.024 [2024-07-12 19:11:37.283189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:32.024 19:11:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:32.024 [2024-07-12 19:11:38.090133] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:32.024 ************************************ 00:15:32.024 START TEST lvs_grow_clean 00:15:32.024 ************************************ 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.024 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.285 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:32.285 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:32.285 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:32.546 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:32.546 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:32.546 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:32.546 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:32.546 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:32.546 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 lvol 150 00:15:32.806 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=51ca5ef8-953e-439c-88ee-a204f235fc5e 00:15:32.806 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:32.806 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:33.067 [2024-07-12 19:11:38.960147] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:33.067 [2024-07-12 19:11:38.960201] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:33.067 true 00:15:33.067 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:33.067 19:11:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:33.067 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:33.067 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:33.328 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 51ca5ef8-953e-439c-88ee-a204f235fc5e 00:15:33.328 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:33.588 [2024-07-12 19:11:39.561994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:33.588 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1369808 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1369808 /var/tmp/bdevperf.sock 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1369808 ']' 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:33.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:33.849 19:11:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:33.849 [2024-07-12 19:11:39.775720] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:15:33.849 [2024-07-12 19:11:39.775772] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1369808 ] 00:15:33.849 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.849 [2024-07-12 19:11:39.849986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.849 [2024-07-12 19:11:39.914130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:34.420 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.420 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:34.420 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:34.680 Nvme0n1 00:15:34.680 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:34.940 [ 00:15:34.940 { 00:15:34.940 "name": "Nvme0n1", 00:15:34.940 "aliases": [ 00:15:34.940 "51ca5ef8-953e-439c-88ee-a204f235fc5e" 00:15:34.940 ], 00:15:34.940 "product_name": "NVMe disk", 00:15:34.940 "block_size": 4096, 00:15:34.940 "num_blocks": 38912, 00:15:34.940 "uuid": "51ca5ef8-953e-439c-88ee-a204f235fc5e", 00:15:34.940 "assigned_rate_limits": { 00:15:34.940 "rw_ios_per_sec": 0, 00:15:34.940 "rw_mbytes_per_sec": 0, 00:15:34.940 "r_mbytes_per_sec": 0, 00:15:34.940 "w_mbytes_per_sec": 0 00:15:34.940 }, 00:15:34.940 "claimed": false, 00:15:34.940 "zoned": false, 00:15:34.940 "supported_io_types": { 00:15:34.940 "read": true, 00:15:34.940 "write": true, 00:15:34.940 "unmap": true, 00:15:34.940 "flush": true, 00:15:34.940 "reset": true, 00:15:34.940 "nvme_admin": true, 00:15:34.940 "nvme_io": true, 00:15:34.940 "nvme_io_md": false, 00:15:34.940 "write_zeroes": true, 00:15:34.940 "zcopy": false, 00:15:34.940 "get_zone_info": false, 00:15:34.940 "zone_management": false, 00:15:34.940 "zone_append": false, 00:15:34.940 "compare": true, 00:15:34.940 "compare_and_write": true, 00:15:34.940 "abort": true, 00:15:34.940 "seek_hole": false, 00:15:34.940 "seek_data": false, 00:15:34.940 "copy": true, 00:15:34.940 "nvme_iov_md": false 00:15:34.940 }, 00:15:34.940 "memory_domains": [ 00:15:34.940 { 00:15:34.940 "dma_device_id": "system", 00:15:34.940 "dma_device_type": 1 00:15:34.940 } 00:15:34.940 ], 00:15:34.940 "driver_specific": { 00:15:34.940 "nvme": [ 00:15:34.940 { 00:15:34.940 "trid": { 00:15:34.940 "trtype": "TCP", 00:15:34.940 "adrfam": "IPv4", 00:15:34.940 "traddr": "10.0.0.2", 00:15:34.940 "trsvcid": "4420", 00:15:34.940 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:34.940 }, 00:15:34.940 "ctrlr_data": { 00:15:34.940 "cntlid": 1, 00:15:34.940 "vendor_id": "0x8086", 00:15:34.940 "model_number": "SPDK bdev Controller", 00:15:34.940 "serial_number": "SPDK0", 00:15:34.940 "firmware_revision": "24.09", 00:15:34.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:34.940 "oacs": { 00:15:34.940 "security": 0, 00:15:34.940 "format": 0, 00:15:34.940 "firmware": 0, 00:15:34.940 "ns_manage": 0 00:15:34.940 }, 00:15:34.940 "multi_ctrlr": true, 00:15:34.940 "ana_reporting": false 00:15:34.940 }, 
00:15:34.940 "vs": { 00:15:34.940 "nvme_version": "1.3" 00:15:34.940 }, 00:15:34.940 "ns_data": { 00:15:34.940 "id": 1, 00:15:34.940 "can_share": true 00:15:34.940 } 00:15:34.940 } 00:15:34.940 ], 00:15:34.940 "mp_policy": "active_passive" 00:15:34.940 } 00:15:34.940 } 00:15:34.940 ] 00:15:34.940 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1370122 00:15:34.940 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:34.940 19:11:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:34.940 Running I/O for 10 seconds... 00:15:36.323 Latency(us) 00:15:36.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.323 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.323 Nvme0n1 : 1.00 18186.00 71.04 0.00 0.00 0.00 0.00 0.00 00:15:36.323 =================================================================================================================== 00:15:36.324 Total : 18186.00 71.04 0.00 0.00 0.00 0.00 0.00 00:15:36.324 00:15:36.897 19:11:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:37.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.157 Nvme0n1 : 2.00 18281.50 71.41 0.00 0.00 0.00 0.00 0.00 00:15:37.157 =================================================================================================================== 00:15:37.157 Total : 18281.50 71.41 0.00 0.00 0.00 0.00 0.00 00:15:37.157 00:15:37.157 true 00:15:37.157 19:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:37.157 19:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:37.417 19:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:37.417 19:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:37.417 19:11:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1370122 00:15:37.987 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.987 Nvme0n1 : 3.00 18294.33 71.46 0.00 0.00 0.00 0.00 0.00 00:15:37.987 =================================================================================================================== 00:15:37.987 Total : 18294.33 71.46 0.00 0.00 0.00 0.00 0.00 00:15:37.987 00:15:38.927 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.927 Nvme0n1 : 4.00 18308.75 71.52 0.00 0.00 0.00 0.00 0.00 00:15:38.927 =================================================================================================================== 00:15:38.927 Total : 18308.75 71.52 0.00 0.00 0.00 0.00 0.00 00:15:38.927 00:15:40.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.312 Nvme0n1 : 5.00 18333.20 71.61 0.00 0.00 0.00 0.00 0.00 00:15:40.312 =================================================================================================================== 00:15:40.312 
Total : 18333.20 71.61 0.00 0.00 0.00 0.00 0.00 00:15:40.312 00:15:41.254 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:41.254 Nvme0n1 : 6.00 18339.00 71.64 0.00 0.00 0.00 0.00 0.00 00:15:41.254 =================================================================================================================== 00:15:41.254 Total : 18339.00 71.64 0.00 0.00 0.00 0.00 0.00 00:15:41.254 00:15:42.197 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:42.197 Nvme0n1 : 7.00 18352.29 71.69 0.00 0.00 0.00 0.00 0.00 00:15:42.197 =================================================================================================================== 00:15:42.197 Total : 18352.29 71.69 0.00 0.00 0.00 0.00 0.00 00:15:42.197 00:15:43.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:43.138 Nvme0n1 : 8.00 18362.25 71.73 0.00 0.00 0.00 0.00 0.00 00:15:43.138 =================================================================================================================== 00:15:43.138 Total : 18362.25 71.73 0.00 0.00 0.00 0.00 0.00 00:15:43.138 00:15:44.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:44.083 Nvme0n1 : 9.00 18371.89 71.77 0.00 0.00 0.00 0.00 0.00 00:15:44.083 =================================================================================================================== 00:15:44.083 Total : 18371.89 71.77 0.00 0.00 0.00 0.00 0.00 00:15:44.083 00:15:45.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.026 Nvme0n1 : 10.00 18382.70 71.81 0.00 0.00 0.00 0.00 0.00 00:15:45.026 =================================================================================================================== 00:15:45.026 Total : 18382.70 71.81 0.00 0.00 0.00 0.00 0.00 00:15:45.026 00:15:45.026 00:15:45.026 Latency(us) 00:15:45.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:45.026 Nvme0n1 : 10.00 18380.44 71.80 0.00 0.00 6960.40 2949.12 11141.12 00:15:45.026 =================================================================================================================== 00:15:45.026 Total : 18380.44 71.80 0.00 0.00 6960.40 2949.12 11141.12 00:15:45.026 0 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1369808 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1369808 ']' 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1369808 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369808 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369808' 00:15:45.026 killing process with pid 1369808 00:15:45.026 19:11:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1369808 00:15:45.026 Received shutdown signal, test time was about 10.000000 seconds 00:15:45.026 00:15:45.026 Latency(us) 00:15:45.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.026 =================================================================================================================== 00:15:45.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:45.026 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1369808 00:15:45.287 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:45.548 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:45.548 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:45.548 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:45.808 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:45.808 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:45.808 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:45.808 [2024-07-12 19:11:51.897607] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:46.069 19:11:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:46.069 request: 00:15:46.069 { 00:15:46.069 "uuid": "26415940-32b0-4ba2-8ab3-d03631e07cf9", 00:15:46.069 "method": "bdev_lvol_get_lvstores", 00:15:46.069 "req_id": 1 00:15:46.069 } 00:15:46.069 Got JSON-RPC error response 00:15:46.069 response: 00:15:46.069 { 00:15:46.069 "code": -19, 00:15:46.069 "message": "No such device" 00:15:46.069 } 00:15:46.069 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:46.069 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:46.069 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:46.069 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:46.069 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:46.330 aio_bdev 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 51ca5ef8-953e-439c-88ee-a204f235fc5e 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=51ca5ef8-953e-439c-88ee-a204f235fc5e 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:46.330 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 51ca5ef8-953e-439c-88ee-a204f235fc5e -t 2000 00:15:46.591 [ 00:15:46.591 { 00:15:46.591 "name": "51ca5ef8-953e-439c-88ee-a204f235fc5e", 00:15:46.591 "aliases": [ 00:15:46.591 "lvs/lvol" 00:15:46.591 ], 00:15:46.591 "product_name": "Logical Volume", 00:15:46.591 "block_size": 4096, 00:15:46.591 "num_blocks": 38912, 00:15:46.591 "uuid": "51ca5ef8-953e-439c-88ee-a204f235fc5e", 00:15:46.591 "assigned_rate_limits": { 00:15:46.591 "rw_ios_per_sec": 0, 00:15:46.591 "rw_mbytes_per_sec": 0, 00:15:46.591 "r_mbytes_per_sec": 0, 00:15:46.591 "w_mbytes_per_sec": 0 00:15:46.591 }, 00:15:46.591 "claimed": false, 00:15:46.591 "zoned": false, 00:15:46.591 "supported_io_types": { 00:15:46.591 "read": true, 00:15:46.591 "write": true, 00:15:46.591 "unmap": true, 00:15:46.591 "flush": false, 00:15:46.591 "reset": true, 00:15:46.591 "nvme_admin": false, 00:15:46.591 "nvme_io": false, 00:15:46.591 
"nvme_io_md": false, 00:15:46.591 "write_zeroes": true, 00:15:46.591 "zcopy": false, 00:15:46.591 "get_zone_info": false, 00:15:46.591 "zone_management": false, 00:15:46.591 "zone_append": false, 00:15:46.591 "compare": false, 00:15:46.591 "compare_and_write": false, 00:15:46.591 "abort": false, 00:15:46.591 "seek_hole": true, 00:15:46.591 "seek_data": true, 00:15:46.591 "copy": false, 00:15:46.591 "nvme_iov_md": false 00:15:46.591 }, 00:15:46.591 "driver_specific": { 00:15:46.591 "lvol": { 00:15:46.591 "lvol_store_uuid": "26415940-32b0-4ba2-8ab3-d03631e07cf9", 00:15:46.591 "base_bdev": "aio_bdev", 00:15:46.591 "thin_provision": false, 00:15:46.591 "num_allocated_clusters": 38, 00:15:46.591 "snapshot": false, 00:15:46.591 "clone": false, 00:15:46.591 "esnap_clone": false 00:15:46.591 } 00:15:46.591 } 00:15:46.591 } 00:15:46.591 ] 00:15:46.591 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:46.591 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:46.591 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:46.591 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:46.591 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:46.591 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:46.851 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:46.851 19:11:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 51ca5ef8-953e-439c-88ee-a204f235fc5e 00:15:47.112 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 26415940-32b0-4ba2-8ab3-d03631e07cf9 00:15:47.112 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:47.372 00:15:47.372 real 0m15.217s 00:15:47.372 user 0m14.981s 00:15:47.372 sys 0m1.235s 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:47.372 ************************************ 00:15:47.372 END TEST lvs_grow_clean 00:15:47.372 ************************************ 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:47.372 ************************************ 00:15:47.372 START TEST lvs_grow_dirty 00:15:47.372 ************************************ 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:47.372 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:47.632 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:47.632 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:47.892 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:15:47.892 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:15:47.892 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:47.892 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:47.892 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:47.892 19:11:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 lvol 150 00:15:48.152 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c983ceda-2bd8-49c1-9c8e-9182d466847f 00:15:48.152 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.152 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:48.152 
[2024-07-12 19:11:54.264128] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:48.152 [2024-07-12 19:11:54.264178] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:48.152 true 00:15:48.152 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:15:48.152 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:48.412 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:48.412 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:48.672 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c983ceda-2bd8-49c1-9c8e-9182d466847f 00:15:48.672 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:48.932 [2024-07-12 19:11:54.873993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:48.932 19:11:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1372884 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1372884 /var/tmp/bdevperf.sock 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1372884 ']' 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.932 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:48.932 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
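As in the clean run earlier, bdevperf is started idle with -z and then driven over its own RPC socket rather than from the command line. A rough sketch of that pattern, with the binary paths shortened to their repo-relative form and the arguments copied from the trace:

  # bdevperf parks itself (-z) until a bdev is attached and tests are requested via its socket.
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  # Attach the exported namespace as Nvme0n1, then kick off the 10 s randwrite run.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  kill "$bdevperf_pid"; wait "$bdevperf_pid" || true   # the trace's killprocess; prompts the shutdown summary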
00:15:48.933 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.933 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:49.193 [2024-07-12 19:11:55.076742] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:15:49.193 [2024-07-12 19:11:55.076791] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372884 ] 00:15:49.193 EAL: No free 2048 kB hugepages reported on node 1 00:15:49.193 [2024-07-12 19:11:55.150120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.193 [2024-07-12 19:11:55.203726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:49.765 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.765 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:49.765 19:11:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:50.026 Nvme0n1 00:15:50.026 19:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:50.286 [ 00:15:50.286 { 00:15:50.286 "name": "Nvme0n1", 00:15:50.286 "aliases": [ 00:15:50.286 "c983ceda-2bd8-49c1-9c8e-9182d466847f" 00:15:50.286 ], 00:15:50.286 "product_name": "NVMe disk", 00:15:50.286 "block_size": 4096, 00:15:50.286 "num_blocks": 38912, 00:15:50.286 "uuid": "c983ceda-2bd8-49c1-9c8e-9182d466847f", 00:15:50.286 "assigned_rate_limits": { 00:15:50.286 "rw_ios_per_sec": 0, 00:15:50.286 "rw_mbytes_per_sec": 0, 00:15:50.286 "r_mbytes_per_sec": 0, 00:15:50.286 "w_mbytes_per_sec": 0 00:15:50.286 }, 00:15:50.286 "claimed": false, 00:15:50.286 "zoned": false, 00:15:50.286 "supported_io_types": { 00:15:50.286 "read": true, 00:15:50.286 "write": true, 00:15:50.286 "unmap": true, 00:15:50.286 "flush": true, 00:15:50.286 "reset": true, 00:15:50.286 "nvme_admin": true, 00:15:50.286 "nvme_io": true, 00:15:50.286 "nvme_io_md": false, 00:15:50.286 "write_zeroes": true, 00:15:50.286 "zcopy": false, 00:15:50.286 "get_zone_info": false, 00:15:50.286 "zone_management": false, 00:15:50.286 "zone_append": false, 00:15:50.286 "compare": true, 00:15:50.286 "compare_and_write": true, 00:15:50.286 "abort": true, 00:15:50.286 "seek_hole": false, 00:15:50.286 "seek_data": false, 00:15:50.286 "copy": true, 00:15:50.286 "nvme_iov_md": false 00:15:50.286 }, 00:15:50.286 "memory_domains": [ 00:15:50.286 { 00:15:50.286 "dma_device_id": "system", 00:15:50.286 "dma_device_type": 1 00:15:50.286 } 00:15:50.286 ], 00:15:50.286 "driver_specific": { 00:15:50.286 "nvme": [ 00:15:50.286 { 00:15:50.286 "trid": { 00:15:50.286 "trtype": "TCP", 00:15:50.286 "adrfam": "IPv4", 00:15:50.286 "traddr": "10.0.0.2", 00:15:50.286 "trsvcid": "4420", 00:15:50.286 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:50.286 }, 00:15:50.286 "ctrlr_data": { 00:15:50.286 "cntlid": 1, 00:15:50.286 "vendor_id": "0x8086", 00:15:50.286 "model_number": "SPDK bdev Controller", 00:15:50.286 "serial_number": "SPDK0", 
00:15:50.286 "firmware_revision": "24.09", 00:15:50.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:50.286 "oacs": { 00:15:50.286 "security": 0, 00:15:50.286 "format": 0, 00:15:50.286 "firmware": 0, 00:15:50.286 "ns_manage": 0 00:15:50.286 }, 00:15:50.286 "multi_ctrlr": true, 00:15:50.286 "ana_reporting": false 00:15:50.286 }, 00:15:50.286 "vs": { 00:15:50.286 "nvme_version": "1.3" 00:15:50.286 }, 00:15:50.286 "ns_data": { 00:15:50.286 "id": 1, 00:15:50.286 "can_share": true 00:15:50.286 } 00:15:50.286 } 00:15:50.286 ], 00:15:50.286 "mp_policy": "active_passive" 00:15:50.286 } 00:15:50.286 } 00:15:50.286 ] 00:15:50.286 19:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1373199 00:15:50.286 19:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:50.287 19:11:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:50.287 Running I/O for 10 seconds... 00:15:51.258 Latency(us) 00:15:51.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:51.258 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:51.258 Nvme0n1 : 1.00 18051.00 70.51 0.00 0.00 0.00 0.00 0.00 00:15:51.258 =================================================================================================================== 00:15:51.258 Total : 18051.00 70.51 0.00 0.00 0.00 0.00 0.00 00:15:51.258 00:15:52.200 19:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:15:52.460 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.460 Nvme0n1 : 2.00 18177.50 71.01 0.00 0.00 0.00 0.00 0.00 00:15:52.460 =================================================================================================================== 00:15:52.460 Total : 18177.50 71.01 0.00 0.00 0.00 0.00 0.00 00:15:52.460 00:15:52.460 true 00:15:52.460 19:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:15:52.460 19:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:52.719 19:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:52.720 19:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:52.720 19:11:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1373199 00:15:53.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.288 Nvme0n1 : 3.00 18241.00 71.25 0.00 0.00 0.00 0.00 0.00 00:15:53.288 =================================================================================================================== 00:15:53.288 Total : 18241.00 71.25 0.00 0.00 0.00 0.00 0.00 00:15:53.288 00:15:54.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:54.670 Nvme0n1 : 4.00 18258.50 71.32 0.00 0.00 0.00 0.00 0.00 00:15:54.670 =================================================================================================================== 00:15:54.670 Total : 18258.50 71.32 0.00 
0.00 0.00 0.00 0.00 00:15:54.670 00:15:55.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.611 Nvme0n1 : 5.00 18293.20 71.46 0.00 0.00 0.00 0.00 0.00 00:15:55.611 =================================================================================================================== 00:15:55.611 Total : 18293.20 71.46 0.00 0.00 0.00 0.00 0.00 00:15:55.611 00:15:56.552 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:56.552 Nvme0n1 : 6.00 18315.17 71.54 0.00 0.00 0.00 0.00 0.00 00:15:56.552 =================================================================================================================== 00:15:56.552 Total : 18315.17 71.54 0.00 0.00 0.00 0.00 0.00 00:15:56.552 00:15:57.492 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:57.492 Nvme0n1 : 7.00 18342.00 71.65 0.00 0.00 0.00 0.00 0.00 00:15:57.492 =================================================================================================================== 00:15:57.492 Total : 18342.00 71.65 0.00 0.00 0.00 0.00 0.00 00:15:57.492 00:15:58.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.437 Nvme0n1 : 8.00 18360.38 71.72 0.00 0.00 0.00 0.00 0.00 00:15:58.437 =================================================================================================================== 00:15:58.437 Total : 18360.38 71.72 0.00 0.00 0.00 0.00 0.00 00:15:58.437 00:15:59.382 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.382 Nvme0n1 : 9.00 18376.22 71.78 0.00 0.00 0.00 0.00 0.00 00:15:59.382 =================================================================================================================== 00:15:59.382 Total : 18376.22 71.78 0.00 0.00 0.00 0.00 0.00 00:15:59.382 00:16:00.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.326 Nvme0n1 : 10.00 18387.50 71.83 0.00 0.00 0.00 0.00 0.00 00:16:00.326 =================================================================================================================== 00:16:00.326 Total : 18387.50 71.83 0.00 0.00 0.00 0.00 0.00 00:16:00.326 00:16:00.326 00:16:00.326 Latency(us) 00:16:00.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.326 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.326 Nvme0n1 : 10.00 18391.24 71.84 0.00 0.00 6956.72 4642.13 16602.45 00:16:00.326 =================================================================================================================== 00:16:00.326 Total : 18391.24 71.84 0.00 0.00 6956.72 4642.13 16602.45 00:16:00.326 0 00:16:00.326 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1372884 00:16:00.326 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1372884 ']' 00:16:00.326 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1372884 00:16:00.326 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:16:00.326 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.326 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1372884 00:16:00.588 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:00.588 19:12:06 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:00.588 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1372884' 00:16:00.588 killing process with pid 1372884 00:16:00.588 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1372884 00:16:00.588 Received shutdown signal, test time was about 10.000000 seconds 00:16:00.588 00:16:00.588 Latency(us) 00:16:00.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:00.588 =================================================================================================================== 00:16:00.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:00.588 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1372884 00:16:00.588 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:00.848 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:00.848 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:00.848 19:12:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1369391 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1369391 00:16:01.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1369391 Killed "${NVMF_APP[@]}" "$@" 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1375345 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1375345 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1375345 ']' 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:01.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:01.109 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:01.109 [2024-07-12 19:12:07.166925] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:16:01.109 [2024-07-12 19:12:07.166981] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:01.109 EAL: No free 2048 kB hugepages reported on node 1 00:16:01.109 [2024-07-12 19:12:07.232449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.370 [2024-07-12 19:12:07.298796] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:01.370 [2024-07-12 19:12:07.298831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:01.370 [2024-07-12 19:12:07.298842] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:01.370 [2024-07-12 19:12:07.298848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:01.370 [2024-07-12 19:12:07.298853] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
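The dirty variant differs from the clean one in how the grown lvstore comes back: the first target is killed with SIGKILL while the lvstore metadata is still dirty, and the fresh target has to recover it when the AIO bdev is re-registered. Condensed from the surrounding trace (the netns wrapper and the waitforlisten/waitforbdev helpers are omitted here):

  kill -9 "$nvmfpid"                                   # leave the lvstore dirty on disk
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &         # fresh target, pid 1375345 in the log
  nvmfpid=$!
  ./scripts/rpc.py bdev_aio_create "$aio_file" aio_bdev 4096
  # Loading the file triggers blobstore recovery ("Performing recovery on blobstore" below),
  # after which the grown geometry must still be visible.
  ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # expect 61
  ./scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # expect 99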
00:16:01.370 [2024-07-12 19:12:07.298872] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.941 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.941 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:01.942 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:01.942 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:01.942 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:01.942 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:01.942 19:12:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:02.202 [2024-07-12 19:12:08.115685] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:02.202 [2024-07-12 19:12:08.115771] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:02.202 [2024-07-12 19:12:08.115800] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:02.202 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:16:02.202 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c983ceda-2bd8-49c1-9c8e-9182d466847f 00:16:02.202 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c983ceda-2bd8-49c1-9c8e-9182d466847f 00:16:02.202 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:02.202 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:02.203 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:02.203 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:02.203 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:02.203 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c983ceda-2bd8-49c1-9c8e-9182d466847f -t 2000 00:16:02.464 [ 00:16:02.464 { 00:16:02.464 "name": "c983ceda-2bd8-49c1-9c8e-9182d466847f", 00:16:02.464 "aliases": [ 00:16:02.464 "lvs/lvol" 00:16:02.464 ], 00:16:02.464 "product_name": "Logical Volume", 00:16:02.464 "block_size": 4096, 00:16:02.464 "num_blocks": 38912, 00:16:02.464 "uuid": "c983ceda-2bd8-49c1-9c8e-9182d466847f", 00:16:02.464 "assigned_rate_limits": { 00:16:02.464 "rw_ios_per_sec": 0, 00:16:02.464 "rw_mbytes_per_sec": 0, 00:16:02.464 "r_mbytes_per_sec": 0, 00:16:02.464 "w_mbytes_per_sec": 0 00:16:02.464 }, 00:16:02.464 "claimed": false, 00:16:02.464 "zoned": false, 00:16:02.464 "supported_io_types": { 00:16:02.464 "read": true, 00:16:02.464 "write": true, 00:16:02.464 "unmap": true, 00:16:02.464 "flush": false, 00:16:02.464 "reset": true, 00:16:02.464 "nvme_admin": false, 00:16:02.464 "nvme_io": false, 00:16:02.464 "nvme_io_md": 
false, 00:16:02.464 "write_zeroes": true, 00:16:02.464 "zcopy": false, 00:16:02.464 "get_zone_info": false, 00:16:02.464 "zone_management": false, 00:16:02.464 "zone_append": false, 00:16:02.464 "compare": false, 00:16:02.464 "compare_and_write": false, 00:16:02.464 "abort": false, 00:16:02.464 "seek_hole": true, 00:16:02.464 "seek_data": true, 00:16:02.464 "copy": false, 00:16:02.464 "nvme_iov_md": false 00:16:02.464 }, 00:16:02.464 "driver_specific": { 00:16:02.464 "lvol": { 00:16:02.464 "lvol_store_uuid": "6c7b3685-6335-41a2-8c4f-e86cca1a99b8", 00:16:02.464 "base_bdev": "aio_bdev", 00:16:02.464 "thin_provision": false, 00:16:02.464 "num_allocated_clusters": 38, 00:16:02.464 "snapshot": false, 00:16:02.464 "clone": false, 00:16:02.464 "esnap_clone": false 00:16:02.464 } 00:16:02.464 } 00:16:02.464 } 00:16:02.464 ] 00:16:02.464 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:02.464 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:02.464 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:16:02.725 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:16:02.725 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:02.725 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:16:02.725 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:16:02.725 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:02.986 [2024-07-12 19:12:08.915685] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:02.986 19:12:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:03.247 request: 00:16:03.247 { 00:16:03.247 "uuid": "6c7b3685-6335-41a2-8c4f-e86cca1a99b8", 00:16:03.247 "method": "bdev_lvol_get_lvstores", 00:16:03.247 "req_id": 1 00:16:03.247 } 00:16:03.247 Got JSON-RPC error response 00:16:03.247 response: 00:16:03.247 { 00:16:03.247 "code": -19, 00:16:03.247 "message": "No such device" 00:16:03.247 } 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:03.247 aio_bdev 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c983ceda-2bd8-49c1-9c8e-9182d466847f 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c983ceda-2bd8-49c1-9c8e-9182d466847f 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:03.247 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:03.508 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c983ceda-2bd8-49c1-9c8e-9182d466847f -t 2000 00:16:03.508 [ 00:16:03.508 { 00:16:03.508 "name": "c983ceda-2bd8-49c1-9c8e-9182d466847f", 00:16:03.508 "aliases": [ 00:16:03.508 "lvs/lvol" 00:16:03.508 ], 00:16:03.508 "product_name": "Logical Volume", 00:16:03.508 "block_size": 4096, 00:16:03.508 "num_blocks": 38912, 00:16:03.508 "uuid": "c983ceda-2bd8-49c1-9c8e-9182d466847f", 00:16:03.508 "assigned_rate_limits": { 00:16:03.508 "rw_ios_per_sec": 0, 00:16:03.508 "rw_mbytes_per_sec": 0, 00:16:03.508 "r_mbytes_per_sec": 0, 00:16:03.508 "w_mbytes_per_sec": 0 00:16:03.508 }, 00:16:03.508 "claimed": false, 00:16:03.508 "zoned": false, 00:16:03.508 "supported_io_types": { 
00:16:03.508 "read": true, 00:16:03.508 "write": true, 00:16:03.508 "unmap": true, 00:16:03.508 "flush": false, 00:16:03.508 "reset": true, 00:16:03.508 "nvme_admin": false, 00:16:03.508 "nvme_io": false, 00:16:03.508 "nvme_io_md": false, 00:16:03.508 "write_zeroes": true, 00:16:03.508 "zcopy": false, 00:16:03.508 "get_zone_info": false, 00:16:03.508 "zone_management": false, 00:16:03.508 "zone_append": false, 00:16:03.508 "compare": false, 00:16:03.508 "compare_and_write": false, 00:16:03.508 "abort": false, 00:16:03.508 "seek_hole": true, 00:16:03.508 "seek_data": true, 00:16:03.508 "copy": false, 00:16:03.508 "nvme_iov_md": false 00:16:03.508 }, 00:16:03.508 "driver_specific": { 00:16:03.508 "lvol": { 00:16:03.508 "lvol_store_uuid": "6c7b3685-6335-41a2-8c4f-e86cca1a99b8", 00:16:03.508 "base_bdev": "aio_bdev", 00:16:03.508 "thin_provision": false, 00:16:03.508 "num_allocated_clusters": 38, 00:16:03.508 "snapshot": false, 00:16:03.508 "clone": false, 00:16:03.508 "esnap_clone": false 00:16:03.508 } 00:16:03.508 } 00:16:03.508 } 00:16:03.508 ] 00:16:03.768 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:16:03.768 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:03.768 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:03.768 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:03.768 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:03.768 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:04.029 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:04.029 19:12:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c983ceda-2bd8-49c1-9c8e-9182d466847f 00:16:04.029 19:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6c7b3685-6335-41a2-8c4f-e86cca1a99b8 00:16:04.290 19:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:04.551 00:16:04.551 real 0m17.082s 00:16:04.551 user 0m44.396s 00:16:04.551 sys 0m2.821s 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:04.551 ************************************ 00:16:04.551 END TEST lvs_grow_dirty 00:16:04.551 ************************************ 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
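The dirty-recovery flow exercised by lvs_grow_dirty above comes down to a short rpc.py sequence; the sketch below condenses it, reusing the paths, bdev name and lvstore UUID from this particular run (placeholders for any other setup) and assuming the target is still reachable on its default RPC socket.

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
LVS_UUID=6c7b3685-6335-41a2-8c4f-e86cca1a99b8
AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev

# Pull the backing AIO bdev out from under the lvstore ("dirty" removal).
$RPC bdev_aio_delete aio_bdev

# With the base bdev gone, lvstore queries are expected to fail with -19 (No such device).
$RPC bdev_lvol_get_lvstores -u "$LVS_UUID" || echo 'lvstore unavailable, as expected'

# Re-create the AIO bdev from the same file and let examine re-discover the lvstore and its lvol.
$RPC bdev_aio_create "$AIO_FILE" aio_bdev 4096
$RPC bdev_wait_for_examine

# The recovered lvstore should still report the grown cluster counts the test asserts on.
$RPC bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters, .[0].total_data_clusters'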
00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:16:04.551 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:04.552 nvmf_trace.0 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:04.552 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:04.552 rmmod nvme_tcp 00:16:04.552 rmmod nvme_fabrics 00:16:04.552 rmmod nvme_keyring 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1375345 ']' 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1375345 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1375345 ']' 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1375345 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1375345 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1375345' 00:16:04.814 killing process with pid 1375345 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1375345 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1375345 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:04.814 
19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:04.814 19:12:10 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.361 19:12:12 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:07.361 00:16:07.361 real 0m43.227s 00:16:07.361 user 1m5.466s 00:16:07.361 sys 0m9.841s 00:16:07.361 19:12:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.361 19:12:12 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:07.361 ************************************ 00:16:07.361 END TEST nvmf_lvs_grow 00:16:07.361 ************************************ 00:16:07.361 19:12:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:07.361 19:12:13 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:07.361 19:12:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.361 19:12:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.361 19:12:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.361 ************************************ 00:16:07.361 START TEST nvmf_bdev_io_wait 00:16:07.361 ************************************ 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:07.361 * Looking for test storage... 
00:16:07.361 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:07.361 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.362 19:12:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:13.952 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:13.952 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:13.952 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:13.952 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:13.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:13.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.522 ms 00:16:13.952 00:16:13.952 --- 10.0.0.2 ping statistics --- 00:16:13.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.952 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:13.952 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:13.952 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:16:13.952 00:16:13.952 --- 10.0.0.1 ping statistics --- 00:16:13.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:13.952 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1380683 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1380683 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1380683 ']' 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:13.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:13.952 19:12:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:13.952 [2024-07-12 19:12:19.763088] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:16:13.952 [2024-07-12 19:12:19.763142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:13.952 EAL: No free 2048 kB hugepages reported on node 1 00:16:13.952 [2024-07-12 19:12:19.828008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:13.952 [2024-07-12 19:12:19.895407] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:13.952 [2024-07-12 19:12:19.895441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:13.953 [2024-07-12 19:12:19.895449] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:13.953 [2024-07-12 19:12:19.895456] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:13.953 [2024-07-12 19:12:19.895464] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:13.953 [2024-07-12 19:12:19.895598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.953 [2024-07-12 19:12:19.895711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.953 [2024-07-12 19:12:19.895866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:13.953 [2024-07-12 19:12:19.895866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.525 [2024-07-12 19:12:20.639921] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
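The TCP fabric this test runs over is put together with plain iproute2 plus the RPCs traced above: one cvl_0_* port of the E810 pair is moved into a private namespace for the target while the other stays in the host namespace as the initiator side. A condensed sketch of that bring-up follows; interface names, addresses and paths are the ones from this rig, and the address-flush/cleanup steps of nvmf_tcp_init are left out.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
NS=cvl_0_0_ns_spdk

# Target NIC goes into its own namespace; initiator NIC stays in the host namespace.
ip netns add $NS
ip link set cvl_0_0 netns $NS
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec $NS ip link set cvl_0_0 up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2    # host namespace -> target namespace

# Launch the target inside the namespace; the harness uses waitforlisten, here a simple
# poll on the RPC socket stands in for it before the remaining setup calls are issued.
ip netns exec $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
until $SPDK/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Deliberately tiny bdev_io pool so submissions hit the wait-for-buffer path this test targets,
# then finish init and create the TCP transport.
$SPDK/scripts/rpc.py bdev_set_options -p 5 -c 1
$SPDK/scripts/rpc.py framework_start_init
$SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192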
00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.525 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 Malloc0 00:16:14.787 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.787 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:14.787 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.787 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.787 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.787 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:14.788 [2024-07-12 19:12:20.708386] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1380873 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1380875 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:14.788 { 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme$subsystem", 00:16:14.788 "trtype": "$TEST_TRANSPORT", 00:16:14.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "$NVMF_PORT", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.788 "hdgst": ${hdgst:-false}, 00:16:14.788 "ddgst": ${ddgst:-false} 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 } 00:16:14.788 EOF 00:16:14.788 )") 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1380877 00:16:14.788 19:12:20 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1380880 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:14.788 { 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme$subsystem", 00:16:14.788 "trtype": "$TEST_TRANSPORT", 00:16:14.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "$NVMF_PORT", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.788 "hdgst": ${hdgst:-false}, 00:16:14.788 "ddgst": ${ddgst:-false} 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 } 00:16:14.788 EOF 00:16:14.788 )") 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:14.788 { 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme$subsystem", 00:16:14.788 "trtype": "$TEST_TRANSPORT", 00:16:14.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "$NVMF_PORT", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.788 "hdgst": ${hdgst:-false}, 00:16:14.788 "ddgst": ${ddgst:-false} 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 } 00:16:14.788 EOF 00:16:14.788 )") 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:14.788 { 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme$subsystem", 00:16:14.788 "trtype": "$TEST_TRANSPORT", 00:16:14.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "$NVMF_PORT", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:14.788 "hdgst": ${hdgst:-false}, 00:16:14.788 "ddgst": ${ddgst:-false} 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 } 00:16:14.788 EOF 00:16:14.788 )") 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1380873 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme1", 00:16:14.788 "trtype": "tcp", 00:16:14.788 "traddr": "10.0.0.2", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "4420", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.788 "hdgst": false, 00:16:14.788 "ddgst": false 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 }' 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
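On the initiator side, each of the four bdevperf workers above receives its controller-attach config as JSON on an inherited file descriptor (the /dev/fd/63 in those command lines is bash process substitution around gen_nvmf_target_json). The sketch below launches just the 0x10 write worker the same way; the params block is copied from the printf output above, while the surrounding subsystems/config envelope is the usual SPDK JSON-config shape and is assumed here rather than copied from the trace.

BDEVPERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

# Assumed envelope around the traced bdev_nvme_attach_controller params.
cat > /tmp/nvme1_attach.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON

# 128-deep 4 KiB writes for 1 second against the attached namespace bdev (Nvme1n1),
# pinned to core mask 0x10 with its own shm id and 256 MiB of hugepage memory.
$BDEVPERF -m 0x10 -i 1 --json /tmp/nvme1_attach.json -q 128 -o 4096 -w write -t 1 -s 256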
00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme1", 00:16:14.788 "trtype": "tcp", 00:16:14.788 "traddr": "10.0.0.2", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "4420", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.788 "hdgst": false, 00:16:14.788 "ddgst": false 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 }' 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme1", 00:16:14.788 "trtype": "tcp", 00:16:14.788 "traddr": "10.0.0.2", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "4420", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.788 "hdgst": false, 00:16:14.788 "ddgst": false 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 }' 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:16:14.788 19:12:20 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:14.788 "params": { 00:16:14.788 "name": "Nvme1", 00:16:14.788 "trtype": "tcp", 00:16:14.788 "traddr": "10.0.0.2", 00:16:14.788 "adrfam": "ipv4", 00:16:14.788 "trsvcid": "4420", 00:16:14.788 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:14.788 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:14.788 "hdgst": false, 00:16:14.788 "ddgst": false 00:16:14.788 }, 00:16:14.788 "method": "bdev_nvme_attach_controller" 00:16:14.788 }' 00:16:14.788 [2024-07-12 19:12:20.761021] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:16:14.788 [2024-07-12 19:12:20.761075] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:14.788 [2024-07-12 19:12:20.763345] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:16:14.788 [2024-07-12 19:12:20.763350] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:16:14.788 [2024-07-12 19:12:20.763352] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:16:14.788 [2024-07-12 19:12:20.763394] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:16:14.788 [2024-07-12 19:12:20.763395] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:14.788 [2024-07-12 19:12:20.763396] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:14.788 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.788 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.788 EAL: No free 2048 kB hugepages reported on node 1 00:16:14.788 [2024-07-12 19:12:20.902384] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.049 EAL: No free 2048 kB hugepages reported on node 1 00:16:15.049 [2024-07-12 19:12:20.946397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.049 [2024-07-12 19:12:20.954127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:15.049 [2024-07-12 19:12:20.984285] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.049 [2024-07-12 19:12:20.997606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:16:15.049 [2024-07-12 19:12:21.030958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:15.049 [2024-07-12 19:12:21.034905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:16:15.049 [2024-07-12 19:12:21.081056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:16:15.049 Running I/O for 1 seconds... 00:16:15.311 Running I/O for 1 seconds... 00:16:15.311 Running I/O for 1 seconds... 00:16:15.311 Running I/O for 1 seconds...
00:16:16.254 00:16:16.254 Latency(us) 00:16:16.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.254 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:16:16.254 Nvme1n1 : 1.00 15476.42 60.45 0.00 0.00 8248.26 4396.37 16384.00 00:16:16.254 =================================================================================================================== 00:16:16.254 Total : 15476.42 60.45 0.00 0.00 8248.26 4396.37 16384.00 00:16:16.254 00:16:16.254 Latency(us) 00:16:16.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.254 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:16:16.254 Nvme1n1 : 1.01 7827.77 30.58 0.00 0.00 16281.23 7536.64 27306.67 00:16:16.254 =================================================================================================================== 00:16:16.254 Total : 7827.77 30.58 0.00 0.00 16281.23 7536.64 27306.67 00:16:16.254 00:16:16.254 Latency(us) 00:16:16.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.254 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:16:16.254 Nvme1n1 : 1.00 137199.01 535.93 0.00 0.00 929.06 435.20 1611.09 00:16:16.254 =================================================================================================================== 00:16:16.254 Total : 137199.01 535.93 0.00 0.00 929.06 435.20 1611.09 00:16:16.254 00:16:16.254 Latency(us) 00:16:16.254 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.254 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:16:16.254 Nvme1n1 : 1.01 12423.79 48.53 0.00 0.00 10264.26 5570.56 15947.09 00:16:16.254 =================================================================================================================== 00:16:16.254 Total : 12423.79 48.53 0.00 0.00 10264.26 5570.56 15947.09 00:16:16.254 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1380875 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1380877 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1380880 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:16.516 rmmod nvme_tcp 00:16:16.516 rmmod nvme_fabrics 00:16:16.516 rmmod nvme_keyring 00:16:16.516 19:12:22 
nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1380683 ']' 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1380683 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1380683 ']' 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1380683 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1380683 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1380683' 00:16:16.516 killing process with pid 1380683 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1380683 00:16:16.516 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1380683 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.777 19:12:22 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.323 19:12:24 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:19.323 00:16:19.323 real 0m11.799s 00:16:19.323 user 0m17.589s 00:16:19.323 sys 0m6.597s 00:16:19.323 19:12:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:19.323 19:12:24 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:19.323 ************************************ 00:16:19.323 END TEST nvmf_bdev_io_wait 00:16:19.323 ************************************ 00:16:19.323 19:12:24 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:19.323 19:12:24 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:19.323 19:12:24 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:19.323 19:12:24 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:19.323 19:12:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:19.323 ************************************ 00:16:19.323 START TEST nvmf_queue_depth 00:16:19.323 
************************************ 00:16:19.323 19:12:24 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:19.323 * Looking for test storage... 00:16:19.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:19.323 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:19.324 19:12:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:25.975 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:25.976 
19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:25.976 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:25.976 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:25.976 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:25.976 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:25.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:25.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.506 ms 00:16:25.976 00:16:25.976 --- 10.0.0.2 ping statistics --- 00:16:25.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.976 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:25.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:16:25.976 00:16:25.976 --- 10.0.0.1 ping statistics --- 00:16:25.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.976 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1385232 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1385232 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1385232 ']' 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.976 19:12:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:25.976 [2024-07-12 19:12:31.661992] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:16:25.976 [2024-07-12 19:12:31.662059] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:25.976 EAL: No free 2048 kB hugepages reported on node 1 00:16:25.976 [2024-07-12 19:12:31.749270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.976 [2024-07-12 19:12:31.842006] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:25.976 [2024-07-12 19:12:31.842060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:25.976 [2024-07-12 19:12:31.842068] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:25.976 [2024-07-12 19:12:31.842075] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:25.976 [2024-07-12 19:12:31.842081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:25.976 [2024-07-12 19:12:31.842106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 [2024-07-12 19:12:32.474851] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 Malloc0 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.549 
19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 [2024-07-12 19:12:32.550966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1385576 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1385576 /var/tmp/bdevperf.sock 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1385576 ']' 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:26.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:26.549 19:12:32 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:26.549 [2024-07-12 19:12:32.603806] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:16:26.549 [2024-07-12 19:12:32.603859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1385576 ] 00:16:26.549 EAL: No free 2048 kB hugepages reported on node 1 00:16:26.549 [2024-07-12 19:12:32.665109] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:26.810 [2024-07-12 19:12:32.735709] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.380 19:12:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:27.380 19:12:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:27.380 19:12:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:27.380 19:12:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.380 19:12:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:27.640 NVMe0n1 00:16:27.640 19:12:33 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.640 19:12:33 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:27.640 Running I/O for 10 seconds... 00:16:37.634 00:16:37.634 Latency(us) 00:16:37.634 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.634 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:37.634 Verification LBA range: start 0x0 length 0x4000 00:16:37.634 NVMe0n1 : 10.07 11581.45 45.24 0.00 0.00 88113.72 24357.55 72963.41 00:16:37.634 =================================================================================================================== 00:16:37.634 Total : 11581.45 45.24 0.00 0.00 88113.72 24357.55 72963.41 00:16:37.634 0 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1385576 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1385576 ']' 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1385576 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1385576 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1385576' 00:16:37.894 killing process with pid 1385576 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1385576 00:16:37.894 Received shutdown signal, test time was about 10.000000 seconds 00:16:37.894 00:16:37.894 Latency(us) 00:16:37.894 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:37.894 
=================================================================================================================== 00:16:37.894 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1385576 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:37.894 19:12:43 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:37.894 rmmod nvme_tcp 00:16:37.894 rmmod nvme_fabrics 00:16:37.894 rmmod nvme_keyring 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1385232 ']' 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1385232 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1385232 ']' 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1385232 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1385232 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1385232' 00:16:38.155 killing process with pid 1385232 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1385232 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1385232 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:38.155 19:12:44 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.702 19:12:46 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:40.702 00:16:40.702 real 0m21.376s 00:16:40.702 user 0m25.389s 00:16:40.702 sys 0m6.111s 00:16:40.702 19:12:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.702 19:12:46 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:40.702 ************************************ 00:16:40.702 END TEST nvmf_queue_depth 00:16:40.702 ************************************ 00:16:40.702 19:12:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:40.702 19:12:46 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:40.702 19:12:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:40.702 19:12:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.702 19:12:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:40.702 ************************************ 00:16:40.702 START TEST nvmf_target_multipath 00:16:40.702 ************************************ 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:40.702 * Looking for test storage... 00:16:40.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:40.702 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:40.703 19:12:46 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:48.849 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:48.849 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:48.849 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:48.849 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:48.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:48.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:16:48.849 00:16:48.849 --- 10.0.0.2 ping statistics --- 00:16:48.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.849 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:48.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:48.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.359 ms 00:16:48.849 00:16:48.849 --- 10.0.0.1 ping statistics --- 00:16:48.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:48.849 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:48.849 19:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:48.849 only one NIC for nvmf test 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:48.850 rmmod nvme_tcp 00:16:48.850 rmmod nvme_fabrics 00:16:48.850 rmmod nvme_keyring 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:48.850 19:12:53 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:49.792 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:50.054 00:16:50.054 real 0m9.579s 00:16:50.054 user 0m2.088s 00:16:50.054 sys 0m5.413s 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:50.054 19:12:55 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:50.054 ************************************ 00:16:50.054 END TEST nvmf_target_multipath 00:16:50.054 ************************************ 00:16:50.054 19:12:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:50.055 19:12:55 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:50.055 19:12:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:50.055 19:12:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:50.055 19:12:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:50.055 ************************************ 00:16:50.055 START TEST nvmf_zcopy 00:16:50.055 ************************************ 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:50.055 * Looking for test storage... 
00:16:50.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:50.055 19:12:56 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.200 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:58.200 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:58.200 19:13:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:58.200 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:16:58.201 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.201 
19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:16:58.201 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:16:58.201 Found net devices under 0000:4b:00.0: cvl_0_0 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:16:58.201 Found net devices under 0000:4b:00.1: cvl_0_1 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:58.201 19:13:02 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:58.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:58.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.592 ms 00:16:58.201 00:16:58.201 --- 10.0.0.2 ping statistics --- 00:16:58.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.201 rtt min/avg/max/mdev = 0.592/0.592/0.592/0.000 ms 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:58.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:58.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.377 ms 00:16:58.201 00:16:58.201 --- 10.0.0.1 ping statistics --- 00:16:58.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:58.201 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1395924 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1395924 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1395924 ']' 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:58.201 19:13:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.201 [2024-07-12 19:13:03.298218] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:16:58.201 [2024-07-12 19:13:03.298279] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:58.201 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.201 [2024-07-12 19:13:03.380686] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.202 [2024-07-12 19:13:03.448050] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:58.202 [2024-07-12 19:13:03.448092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
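The nvmf_tcp_init sequence traced above splits the two e810 ports between a dedicated target network namespace and the default (initiator) namespace, then verifies reachability in both directions; nvmf_tgt is then launched inside that namespace, which is the startup log just above. Condensed into plain commands, using exactly the interface names and addresses the log reports, the network plumbing amounts to the following sketch.

# Target port cvl_0_0 is isolated in its own namespace; initiator port cvl_0_1
# stays in the default namespace. Interfaces and addresses are the ones in the trace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator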
00:16:58.202 [2024-07-12 19:13:03.448102] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:58.202 [2024-07-12 19:13:03.448109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:58.202 [2024-07-12 19:13:03.448115] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:58.202 [2024-07-12 19:13:03.448141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 [2024-07-12 19:13:04.107757] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 [2024-07-12 19:13:04.123935] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 malloc0 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.202 
19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:58.202 { 00:16:58.202 "params": { 00:16:58.202 "name": "Nvme$subsystem", 00:16:58.202 "trtype": "$TEST_TRANSPORT", 00:16:58.202 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:58.202 "adrfam": "ipv4", 00:16:58.202 "trsvcid": "$NVMF_PORT", 00:16:58.202 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:58.202 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:58.202 "hdgst": ${hdgst:-false}, 00:16:58.202 "ddgst": ${ddgst:-false} 00:16:58.202 }, 00:16:58.202 "method": "bdev_nvme_attach_controller" 00:16:58.202 } 00:16:58.202 EOF 00:16:58.202 )") 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:16:58.202 19:13:04 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:58.202 "params": { 00:16:58.202 "name": "Nvme1", 00:16:58.202 "trtype": "tcp", 00:16:58.202 "traddr": "10.0.0.2", 00:16:58.202 "adrfam": "ipv4", 00:16:58.202 "trsvcid": "4420", 00:16:58.202 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:58.202 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:58.202 "hdgst": false, 00:16:58.202 "ddgst": false 00:16:58.202 }, 00:16:58.202 "method": "bdev_nvme_attach_controller" 00:16:58.202 }' 00:16:58.202 [2024-07-12 19:13:04.211608] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:16:58.202 [2024-07-12 19:13:04.211669] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1396244 ] 00:16:58.202 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.202 [2024-07-12 19:13:04.274477] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:58.463 [2024-07-12 19:13:04.348289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:58.463 Running I/O for 10 seconds... 
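At this point the target is fully configured and the first bdevperf pass (10 seconds of 8 KiB verify I/O at queue depth 128) is running. The rpc_cmd calls traced above forward their arguments to SPDK's scripts/rpc.py, so the whole setup can be restated as the sketch below; the script path and the default /var/tmp/spdk.sock RPC socket are assumptions, while the arguments are taken verbatim from the trace.

# Target side: the same RPCs zcopy.sh issues above, against the nvmf_tgt started
# with "ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2".
RPC=./scripts/rpc.py    # assumed location; talks to the default /var/tmp/spdk.sock

$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy      # flags as traced; --zcopy enables zero-copy, -c 0 sets in-capsule data size to 0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0             # 32 MB RAM-backed bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Initiator side: bdevperf attaches over NVMe/TCP using the JSON produced by
# gen_nvmf_target_json (the trace shows it arriving as /dev/fd/62).
./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192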
00:17:08.463 00:17:08.463 Latency(us) 00:17:08.463 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.463 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:08.463 Verification LBA range: start 0x0 length 0x1000 00:17:08.463 Nvme1n1 : 10.01 9358.82 73.12 0.00 0.00 13624.94 1167.36 32112.64 00:17:08.463 =================================================================================================================== 00:17:08.463 Total : 9358.82 73.12 0.00 0.00 13624.94 1167.36 32112.64 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1398254 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:08.725 [2024-07-12 19:13:14.660904] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.660932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:08.725 { 00:17:08.725 "params": { 00:17:08.725 "name": "Nvme$subsystem", 00:17:08.725 "trtype": "$TEST_TRANSPORT", 00:17:08.725 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:08.725 "adrfam": "ipv4", 00:17:08.725 "trsvcid": "$NVMF_PORT", 00:17:08.725 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:08.725 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:08.725 "hdgst": ${hdgst:-false}, 00:17:08.725 "ddgst": ${ddgst:-false} 00:17:08.725 }, 00:17:08.725 "method": "bdev_nvme_attach_controller" 00:17:08.725 } 00:17:08.725 EOF 00:17:08.725 )") 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:17:08.725 [2024-07-12 19:13:14.668892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.668904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:08.725 19:13:14 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:08.725 "params": { 00:17:08.725 "name": "Nvme1", 00:17:08.725 "trtype": "tcp", 00:17:08.725 "traddr": "10.0.0.2", 00:17:08.725 "adrfam": "ipv4", 00:17:08.725 "trsvcid": "4420", 00:17:08.725 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:08.725 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:08.725 "hdgst": false, 00:17:08.725 "ddgst": false 00:17:08.725 }, 00:17:08.725 "method": "bdev_nvme_attach_controller" 00:17:08.725 }' 00:17:08.725 [2024-07-12 19:13:14.676909] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.676918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.684928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.684935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.692948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.692956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.700968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.700975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.703271] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
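The interleaved "Requested NSID 1 already in use" / "Unable to add namespace" errors here come from nvmf_subsystem_add_ns being re-issued for an NSID that already exists while I/O is in flight; from the cadence of the messages this appears to be deliberate on the test's part rather than a failure of the run. The object printed just above is only the bdev_nvme_attach_controller entry; bdevperf itself receives a complete SPDK JSON config on /dev/fd/63. Reconstructed from the printed parameters, and assuming the usual "subsystems"/"bdev"/"config" wrapper of SPDK JSON configs (the helper may append further entries such as bdev_wait_for_examine), the input looks roughly like the sketch below; the file path is purely illustrative.

# Approximate config handed to the second bdevperf pass. /tmp/bdevperf_nvme.json
# is a hypothetical path; the test script pipes the JSON through a file descriptor.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
# Second workload from the trace: 5 seconds of 50/50 random read/write, QD 128, 8 KiB I/O.
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192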
00:17:08.725 [2024-07-12 19:13:14.703317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1398254 ] 00:17:08.725 [2024-07-12 19:13:14.708988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.708996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.717009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.717017] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.725030] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.725037] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 EAL: No free 2048 kB hugepages reported on node 1 00:17:08.725 [2024-07-12 19:13:14.733050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.733058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.741071] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.741079] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.749092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.749099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.757111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.757118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.760397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.725 [2024-07-12 19:13:14.765136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.765144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.773155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.773163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.781173] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.781181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.789193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.789201] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.797214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.797226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.805234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 
19:13:14.805241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.813254] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.813262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.821274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.821282] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.824108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.725 [2024-07-12 19:13:14.829295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.829302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.837321] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.837330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.845341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.845356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.725 [2024-07-12 19:13:14.853361] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.725 [2024-07-12 19:13:14.853369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.861379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.861388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.869401] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.869408] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.877421] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.877429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.885442] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.885449] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.893463] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.893471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.901492] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.901505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.909508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.909517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.917529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.917537] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.925550] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.925560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.933570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.933579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.941590] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.941599] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.949609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.949617] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.957628] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.957636] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.965648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.965656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.973668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.973676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.981691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.981701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.989709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.989717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:14.997732] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:14.997739] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.005754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.005762] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.013775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.013782] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.021796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.021804] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.029816] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.029826] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.037837] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.037844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.045857] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.045864] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.053878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.053886] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.061898] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.061905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.069919] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.069927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.077941] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.077948] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.085967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.085981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 Running I/O for 5 seconds... 00:17:08.987 [2024-07-12 19:13:15.093984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.093991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.106090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.106106] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:08.987 [2024-07-12 19:13:15.114540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:08.987 [2024-07-12 19:13:15.114555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.123155] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.123170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.131872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.131888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.140300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.140315] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.149000] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.149015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.157612] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 
[2024-07-12 19:13:15.157627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.166661] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.166676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.175844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.175859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.184665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.184680] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.193185] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.193200] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.201964] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.201981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.210872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.210888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.219486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.219502] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.228242] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.228257] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.236744] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.236759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.245598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.245612] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.254677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.254693] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.263610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.263624] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.272103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.272117] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.280938] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.280952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.289971] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.289986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.299053] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.299068] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.307366] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.307381] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.316353] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.316368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.325471] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.325485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.334508] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.334523] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.342839] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.342853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.351622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.351637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.360541] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.360556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.368615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.368630] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.248 [2024-07-12 19:13:15.377274] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.248 [2024-07-12 19:13:15.377288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.509 [2024-07-12 19:13:15.386236] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.509 [2024-07-12 19:13:15.386251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.509 [2024-07-12 19:13:15.395120] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.509 [2024-07-12 19:13:15.395140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.509 [2024-07-12 19:13:15.404073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.509 [2024-07-12 19:13:15.404090] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:09.509 [2024-07-12 19:13:15.413288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:09.509 [2024-07-12 19:13:15.413303] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors (subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for every subsequent add-namespace attempt, roughly every 9 ms, from 2024-07-12 19:13:15.421 through 19:13:18.063 (pipeline time 00:17:09.509 to 00:17:12.125); only the timestamps differ between repetitions ...]
00:17:12.125 [2024-07-12 19:13:18.072070]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.072085] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.081057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.081071] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.089829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.089844] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.098506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.098520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.106929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.106943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.115754] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.115772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.124046] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.124060] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.132891] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.132906] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.140675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.140689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.149592] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.149606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.157709] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.157723] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.166318] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.166332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.175001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.175015] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.184025] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.184038] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.192705] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.192720] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.201288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.201303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.209792] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.209806] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.218780] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.218795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.227208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.227222] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.235910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.235924] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.244802] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.244816] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.125 [2024-07-12 19:13:18.253045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.125 [2024-07-12 19:13:18.253059] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.261775] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.261789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.270556] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.270571] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.279062] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.279080] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.287404] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.287419] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.296166] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.296181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.305037] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.305051] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.313458] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.313472] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.322152] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.322167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.331234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.331249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.340056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.340070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.348715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.348729] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.357117] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.357135] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.365928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.365943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.374505] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.374519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.382831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.382845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.391748] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.391763] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.400547] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.400561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.408280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.408296] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.417615] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.417629] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.425745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.425759] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.434209] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.434223] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.443140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.443158] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.451513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.451528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.460687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.460701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.469545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.469559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.478011] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.478026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.486877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.486891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.495352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.495366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.503824] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.503838] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.387 [2024-07-12 19:13:18.512870] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.387 [2024-07-12 19:13:18.512884] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.520822] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.520837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.529961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.529975] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.538266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.538280] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.547373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.547388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.560331] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.560347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.568078] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.568092] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.576957] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.576971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.585430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.585445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.594282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.648 [2024-07-12 19:13:18.594297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.648 [2024-07-12 19:13:18.602663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.602678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.611266] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.611285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.620299] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.620314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.628950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.628965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.637105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.637120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.645464] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.645479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.654352] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.654367] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.663141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.663157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.672288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.672303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.679988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.680003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.688684] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.688699] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.697292] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.697307] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.706034] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.706049] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.714944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.714959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.723543] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.723557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.731885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.731900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.740133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.740148] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.749092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.749107] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.757715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.757731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.765997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.766011] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.649 [2024-07-12 19:13:18.774433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.649 [2024-07-12 19:13:18.774448] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.783248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.783263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.792156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.792171] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.800553] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.800569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.809365] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.809379] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.817975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.817990] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.826493] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.826508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.910 [2024-07-12 19:13:18.835278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.910 [2024-07-12 19:13:18.835293] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.844119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.844138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.852873] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.852888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.861514] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.861529] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.870195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.870209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.879057] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.879072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.887500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.887514] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.896379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.896394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.905329] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.905343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.914348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.914363] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.922862] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.922878] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.931535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.931550] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.940088] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.940102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.948529] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.948544] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.957106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.957126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.965944] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.965959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.974523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.974538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.983285] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.983300] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:18.992022] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:18.992036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:19.001001] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:19.001016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:19.009967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:19.009982] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:19.018328] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:19.018343] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:19.026416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:19.026432] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:12.911 [2024-07-12 19:13:19.035029] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:12.911 [2024-07-12 19:13:19.035043] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.043629] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.043644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.052311] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.052326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.060868] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.060883] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.069415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.069430] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.078157] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.078172] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.087019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.087034] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.096098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.096113] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.104631] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.104646] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.113598] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.113614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.121796] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.121811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.130540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.130556] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.139250] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.139265] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.147660] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.147675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.156341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.156356] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.165042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.165057] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.173625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.173640] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.182682] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.182697] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.190348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.190362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.199731] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.199747] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.208164] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.208180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.217079] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.217093] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.225572] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.225587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.234414] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.234429] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.243128] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.243143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.251937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.251952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.260054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.260069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.268906] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.268921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.277668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.277683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.286795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.286810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.173 [2024-07-12 19:13:19.295478] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.173 [2024-07-12 19:13:19.295492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.434 [2024-07-12 19:13:19.304608] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.434 [2024-07-12 19:13:19.304623] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.313710] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.313724] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.322067] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.322081] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.330718] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.330732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.339300] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.339314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.347688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.347702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.356578] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.356592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.365373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.365387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.374435] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.374450] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.383194] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.383209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.392288] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.392304] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.401225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.401239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.410218] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.410233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.419127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.419141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.428066] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.428084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.436465] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.436479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.445486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.445501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.453281] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.453296] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.462172] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.462186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.471228] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.471242] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.479653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.479667] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.488211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.488226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.496782] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.496797] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.505402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.505417] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.513794] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.513809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.522669] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.522683] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.531373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.531387] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.539776] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.539790] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.548469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.548485] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.435 [2024-07-12 19:13:19.557545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.435 [2024-07-12 19:13:19.557559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.565886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.565902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.574506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.574521] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.583070] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.583084] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.591559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.591578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.600133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.600147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.608699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.608713] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.617425] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.617440] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.626341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.626355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.634810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.634824] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.643044] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.643058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.651706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.651721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.659883] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.659897] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.668846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.668861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.677198] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.677212] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.685939] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.685953] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.694571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.694587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.703571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.703586] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.712512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.712527] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.720948] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.720963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.729523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.729538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.737992] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.738007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.746225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.746239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.754765] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.754783] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.763083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.763097] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.772050] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.772064] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.696 [2024-07-12 19:13:19.780182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.696 [2024-07-12 19:13:19.780196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.697 [2024-07-12 19:13:19.789223] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.697 [2024-07-12 19:13:19.789237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.697 [2024-07-12 19:13:19.798226] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.697 [2024-07-12 19:13:19.798240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.697 [2024-07-12 19:13:19.806622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.697 [2024-07-12 19:13:19.806637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.697 [2024-07-12 19:13:19.815453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.697 [2024-07-12 19:13:19.815468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.697 [2024-07-12 19:13:19.824322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.697 [2024-07-12 19:13:19.824337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.833048] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.833065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.841844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.841859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.850654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.850669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.859225] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.859240] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.867774] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.867788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.876391] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.876405] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.884788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.884802] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.893485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.893499] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.902375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.902390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.910674] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.910689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.919530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.919549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.928509] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.928524] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.937576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.937590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.945988] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.946002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.954535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.954549] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.963061] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.963077] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.972002] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.972016] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.972 [2024-07-12 19:13:19.980675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.972 [2024-07-12 19:13:19.980689] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:19.989182] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:19.989196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:19.997972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:19.997986] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.004398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.004413] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.014203] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.014218] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.024073] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.024094] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.031524] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.031544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.042290] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.042309] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.052160] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.052178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.059753] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.059771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.070871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.070895] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.079515] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.079532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:13.973 [2024-07-12 19:13:20.088463] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:13.973 [2024-07-12 19:13:20.088479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.268 [2024-07-12 19:13:20.096885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.096900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.105900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.105915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.111630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.111644] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 00:17:14.269 Latency(us) 00:17:14.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:14.269 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:17:14.269 Nvme1n1 : 5.01 19400.51 151.57 0.00 0.00 6591.54 2389.33 18022.40 00:17:14.269 =================================================================================================================== 00:17:14.269 Total : 19400.51 151.57 0.00 0.00 6591.54 2389.33 18022.40 00:17:14.269 [2024-07-12 19:13:20.119647] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.119658] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.127666] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.127676] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.135688] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.135696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.143712] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.143721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.151730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.151740] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.159749] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.159758] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.167768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.167776] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.175790] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.175798] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.183810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.183817] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.191829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.191836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.199850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.199858] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.207871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.207879] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.215890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.215898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.223913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.223922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.231932] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.231940] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 [2024-07-12 19:13:20.239952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:14.269 [2024-07-12 19:13:20.239959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:14.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1398254) - No such process 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1398254 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.269 delay0 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.269 19:13:20 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:17:14.269 EAL: No free 2048 kB hugepages 
reported on node 1 00:17:14.269 [2024-07-12 19:13:20.374705] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:20.859 Initializing NVMe Controllers 00:17:20.859 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:20.859 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:20.859 Initialization complete. Launching workers. 00:17:20.859 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 195 00:17:20.859 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 482, failed to submit 33 00:17:20.859 success 324, unsuccess 158, failed 0 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:20.859 rmmod nvme_tcp 00:17:20.859 rmmod nvme_fabrics 00:17:20.859 rmmod nvme_keyring 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1395924 ']' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1395924 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1395924 ']' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1395924 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1395924 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1395924' 00:17:20.859 killing process with pid 1395924 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1395924 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1395924 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:20.859 19:13:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.802 19:13:28 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:22.802 00:17:22.802 real 0m32.810s 00:17:22.802 user 0m44.920s 00:17:22.802 sys 0m10.074s 00:17:22.802 19:13:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:22.802 19:13:28 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:22.802 ************************************ 00:17:22.802 END TEST nvmf_zcopy 00:17:22.802 ************************************ 00:17:22.802 19:13:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:22.802 19:13:28 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:22.802 19:13:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:22.802 19:13:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:22.802 19:13:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:22.802 ************************************ 00:17:22.802 START TEST nvmf_nmic 00:17:22.802 ************************************ 00:17:22.802 19:13:28 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:23.064 * Looking for test storage... 00:17:23.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.064 19:13:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 
-eq 1 ']' 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.065 19:13:29 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.653 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:29.654 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:29.654 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:29.654 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:29.654 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.654 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.915 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.915 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.915 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:29.915 19:13:35 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.915 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.915 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.915 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:29.915 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:29.915 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.460 ms 00:17:29.915 00:17:29.915 --- 10.0.0.2 ping statistics --- 00:17:29.915 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.915 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:17:29.915 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:30.176 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:30.176 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:17:30.176 00:17:30.176 --- 10.0.0.1 ping statistics --- 00:17:30.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.176 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1404626 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1404626 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1404626 ']' 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:30.176 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:30.176 [2024-07-12 19:13:36.155503] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:17:30.176 [2024-07-12 19:13:36.155566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.176 EAL: No free 2048 kB hugepages reported on node 1 00:17:30.176 [2024-07-12 19:13:36.226044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:30.176 [2024-07-12 19:13:36.302781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.176 [2024-07-12 19:13:36.302818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.176 [2024-07-12 19:13:36.302828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.176 [2024-07-12 19:13:36.302834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.176 [2024-07-12 19:13:36.302840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.176 [2024-07-12 19:13:36.302974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.176 [2024-07-12 19:13:36.303112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:30.176 [2024-07-12 19:13:36.303269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:30.176 [2024-07-12 19:13:36.303270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 [2024-07-12 19:13:36.981793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.118 19:13:36 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 Malloc0 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 [2024-07-12 19:13:37.041147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:31.118 test case1: single bdev can't be used in multiple subsystems 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.118 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.119 [2024-07-12 19:13:37.077075] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:31.119 [2024-07-12 19:13:37.077094] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:31.119 [2024-07-12 19:13:37.077102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:31.119 request: 00:17:31.119 { 00:17:31.119 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:31.119 "namespace": { 00:17:31.119 "bdev_name": "Malloc0", 00:17:31.119 "no_auto_visible": false 00:17:31.119 }, 00:17:31.119 "method": "nvmf_subsystem_add_ns", 00:17:31.119 "req_id": 1 00:17:31.119 } 00:17:31.119 Got JSON-RPC error response 00:17:31.119 response: 00:17:31.119 { 00:17:31.119 "code": -32602, 00:17:31.119 "message": "Invalid parameters" 00:17:31.119 } 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # 
echo ' Adding namespace failed - expected result.' 00:17:31.119 Adding namespace failed - expected result. 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:31.119 test case2: host connect to nvmf target in multiple paths 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:31.119 [2024-07-12 19:13:37.089203] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.119 19:13:37 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:33.031 19:13:38 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:34.415 19:13:40 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:34.415 19:13:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:34.415 19:13:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:34.415 19:13:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:34.415 19:13:40 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:36.326 19:13:42 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:36.326 [global] 00:17:36.326 thread=1 00:17:36.326 invalidate=1 00:17:36.326 rw=write 00:17:36.326 time_based=1 00:17:36.326 runtime=1 00:17:36.326 ioengine=libaio 00:17:36.326 direct=1 00:17:36.326 bs=4096 00:17:36.326 iodepth=1 00:17:36.326 norandommap=0 00:17:36.326 numjobs=1 00:17:36.326 00:17:36.326 verify_dump=1 00:17:36.326 verify_backlog=512 00:17:36.326 verify_state_save=0 00:17:36.326 do_verify=1 00:17:36.326 verify=crc32c-intel 00:17:36.326 [job0] 00:17:36.326 filename=/dev/nvme0n1 00:17:36.326 Could not set queue depth (nvme0n1) 00:17:36.586 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:36.586 fio-3.35 00:17:36.586 Starting 1 thread 00:17:37.978 00:17:37.978 job0: (groupid=0, jobs=1): err= 0: pid=1406148: Fri Jul 12 19:13:43 2024 00:17:37.978 read: IOPS=19, BW=78.1KiB/s 
(80.0kB/s)(80.0KiB/1024msec) 00:17:37.978 slat (nsec): min=25735, max=27750, avg=26321.10, stdev=504.96 00:17:37.978 clat (usec): min=800, max=41206, avg=36967.69, stdev=12357.85 00:17:37.978 lat (usec): min=825, max=41234, avg=36994.01, stdev=12357.90 00:17:37.978 clat percentiles (usec): 00:17:37.978 | 1.00th=[ 799], 5.00th=[ 799], 10.00th=[ 865], 20.00th=[41157], 00:17:37.978 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:37.978 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:37.978 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:17:37.978 | 99.99th=[41157] 00:17:37.978 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:17:37.978 slat (usec): min=9, max=26855, avg=83.66, stdev=1185.49 00:17:37.978 clat (usec): min=181, max=695, avg=463.86, stdev=95.65 00:17:37.978 lat (usec): min=191, max=27218, avg=547.53, stdev=1185.23 00:17:37.978 clat percentiles (usec): 00:17:37.978 | 1.00th=[ 255], 5.00th=[ 302], 10.00th=[ 347], 20.00th=[ 367], 00:17:37.978 | 30.00th=[ 412], 40.00th=[ 449], 50.00th=[ 465], 60.00th=[ 490], 00:17:37.978 | 70.00th=[ 519], 80.00th=[ 553], 90.00th=[ 586], 95.00th=[ 619], 00:17:37.978 | 99.00th=[ 660], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 693], 00:17:37.978 | 99.99th=[ 693] 00:17:37.978 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:37.978 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:37.978 lat (usec) : 250=0.38%, 500=61.47%, 750=34.40%, 1000=0.38% 00:17:37.978 lat (msec) : 50=3.38% 00:17:37.978 cpu : usr=0.78%, sys=2.25%, ctx=535, majf=0, minf=1 00:17:37.978 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:37.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.978 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:37.978 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:37.978 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:37.978 00:17:37.978 Run status group 0 (all jobs): 00:17:37.978 READ: bw=78.1KiB/s (80.0kB/s), 78.1KiB/s-78.1KiB/s (80.0kB/s-80.0kB/s), io=80.0KiB (81.9kB), run=1024-1024msec 00:17:37.978 WRITE: bw=2000KiB/s (2048kB/s), 2000KiB/s-2000KiB/s (2048kB/s-2048kB/s), io=2048KiB (2097kB), run=1024-1024msec 00:17:37.978 00:17:37.978 Disk stats (read/write): 00:17:37.978 nvme0n1: ios=68/512, merge=0/0, ticks=1200/205, in_queue=1405, util=98.70% 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:37.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:37.978 19:13:43 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:37.978 19:13:44 
nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:37.978 rmmod nvme_tcp 00:17:37.978 rmmod nvme_fabrics 00:17:37.978 rmmod nvme_keyring 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1404626 ']' 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1404626 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1404626 ']' 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1404626 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:37.978 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1404626 00:17:38.239 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:38.239 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:38.239 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1404626' 00:17:38.239 killing process with pid 1404626 00:17:38.239 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1404626 00:17:38.239 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1404626 00:17:38.239 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.240 19:13:44 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.787 19:13:46 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:40.787 00:17:40.787 real 0m17.423s 00:17:40.787 user 0m45.560s 00:17:40.787 sys 0m6.053s 00:17:40.787 19:13:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:40.787 19:13:46 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:40.787 ************************************ 00:17:40.787 END TEST nvmf_nmic 00:17:40.787 ************************************ 00:17:40.787 19:13:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:40.787 19:13:46 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:40.787 19:13:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:40.787 19:13:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:40.787 19:13:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:40.787 ************************************ 00:17:40.787 START TEST nvmf_fio_target 00:17:40.787 ************************************ 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:40.787 * Looking for test storage... 00:17:40.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:40.787 19:13:46 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:47.380 19:13:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:17:47.380 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:17:47.380 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.380 19:13:53 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:17:47.380 Found net devices under 0000:4b:00.0: cvl_0_0 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:17:47.380 Found net devices under 0000:4b:00.1: cvl_0_1 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:47.380 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:47.642 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:47.642 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.547 ms 00:17:47.642 00:17:47.642 --- 10.0.0.2 ping statistics --- 00:17:47.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.642 rtt min/avg/max/mdev = 0.547/0.547/0.547/0.000 ms 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:47.642 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:47.642 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:17:47.642 00:17:47.642 --- 10.0.0.1 ping statistics --- 00:17:47.642 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:47.642 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1410504 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1410504 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1410504 ']' 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:47.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
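The nvmf_tcp_init trace above builds a small two-port loopback topology on the E810 NIC: port cvl_0_0 is moved into a private network namespace (cvl_0_0_ns_spdk) and addressed as 10.0.0.2 for the target side, while its sibling port cvl_0_1 stays in the default namespace as 10.0.0.1 for the initiator, and nvmf_tgt is then launched inside that namespace. Condensed into plain shell, the setup amounts to roughly the following (a sketch reusing the interface and namespace names from the log above, not the literal nvmf/common.sh code):

  ip netns add cvl_0_0_ns_spdk                      # private namespace for the target side
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the first E810 port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP in
  ping -c 1 10.0.0.2                                # initiator -> target reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator reachability
  modprobe nvme-tcp                                 # host-side NVMe/TCP initiator driver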
00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:47.642 19:13:53 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.642 [2024-07-12 19:13:53.756925] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:17:47.642 [2024-07-12 19:13:53.756979] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:47.903 EAL: No free 2048 kB hugepages reported on node 1 00:17:47.903 [2024-07-12 19:13:53.826174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:47.903 [2024-07-12 19:13:53.894620] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:47.903 [2024-07-12 19:13:53.894657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:47.903 [2024-07-12 19:13:53.894665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:47.903 [2024-07-12 19:13:53.894672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:47.903 [2024-07-12 19:13:53.894677] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:47.903 [2024-07-12 19:13:53.894811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:47.903 [2024-07-12 19:13:53.894926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:47.903 [2024-07-12 19:13:53.895081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.903 [2024-07-12 19:13:53.895082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.475 19:13:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:48.735 [2024-07-12 19:13:54.709146] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.735 19:13:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.996 19:13:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:48.996 19:13:54 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:48.996 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:48.996 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.256 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
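Before the fio jobs run, fio.sh provisions the target over the RPC socket; the rpc.py calls above and in the trace that follows create a TCP transport, seven 64 MB malloc bdevs, a raid0 and a concat bdev built from five of them, and a single subsystem exposing four namespaces on 10.0.0.2:4420, which the host then attaches with nvme connect. A condensed sketch of that sequence (rpc.py stands for the full scripts/rpc.py path shown in the log; per-call output is omitted):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512                  # repeated to create Malloc0 .. Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # waitforserial then polls lsblk until 4 devices with serial SPDKISFASTANDAWESOME appear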
00:17:49.256 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.517 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:49.517 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:49.517 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:49.778 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:49.778 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:50.038 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:50.038 19:13:55 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:50.038 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:50.038 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:50.299 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:50.559 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:50.559 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:50.559 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:50.559 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:50.820 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.080 [2024-07-12 19:13:56.966545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.080 19:13:56 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:51.080 19:13:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:51.341 19:13:57 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:53.254 19:13:58 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:53.254 19:13:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:53.254 19:13:58 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:53.254 19:13:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:53.254 19:13:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:53.254 19:13:58 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:55.233 19:14:00 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:55.233 [global] 00:17:55.233 thread=1 00:17:55.233 invalidate=1 00:17:55.233 rw=write 00:17:55.233 time_based=1 00:17:55.233 runtime=1 00:17:55.233 ioengine=libaio 00:17:55.233 direct=1 00:17:55.233 bs=4096 00:17:55.233 iodepth=1 00:17:55.233 norandommap=0 00:17:55.233 numjobs=1 00:17:55.233 00:17:55.233 verify_dump=1 00:17:55.233 verify_backlog=512 00:17:55.233 verify_state_save=0 00:17:55.233 do_verify=1 00:17:55.233 verify=crc32c-intel 00:17:55.233 [job0] 00:17:55.233 filename=/dev/nvme0n1 00:17:55.233 [job1] 00:17:55.233 filename=/dev/nvme0n2 00:17:55.233 [job2] 00:17:55.233 filename=/dev/nvme0n3 00:17:55.233 [job3] 00:17:55.233 filename=/dev/nvme0n4 00:17:55.233 Could not set queue depth (nvme0n1) 00:17:55.233 Could not set queue depth (nvme0n2) 00:17:55.234 Could not set queue depth (nvme0n3) 00:17:55.234 Could not set queue depth (nvme0n4) 00:17:55.494 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.494 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.494 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.494 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:55.494 fio-3.35 00:17:55.494 Starting 4 threads 00:17:56.876 00:17:56.876 job0: (groupid=0, jobs=1): err= 0: pid=1412396: Fri Jul 12 19:14:02 2024 00:17:56.876 read: IOPS=376, BW=1506KiB/s (1542kB/s)(1524KiB/1012msec) 00:17:56.876 slat (nsec): min=6098, max=55247, avg=23391.58, stdev=7656.23 00:17:56.876 clat (usec): min=451, max=42417, avg=1862.80, stdev=6599.80 00:17:56.876 lat (usec): min=458, max=42459, avg=1886.19, stdev=6600.49 00:17:56.876 clat percentiles (usec): 00:17:56.876 | 1.00th=[ 562], 5.00th=[ 627], 10.00th=[ 660], 20.00th=[ 709], 00:17:56.876 | 30.00th=[ 742], 40.00th=[ 775], 50.00th=[ 791], 60.00th=[ 816], 00:17:56.876 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 881], 95.00th=[ 930], 00:17:56.876 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.876 | 99.99th=[42206] 00:17:56.876 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:17:56.877 slat (nsec): min=8535, max=65874, avg=29100.36, stdev=10157.35 00:17:56.877 clat 
(usec): min=145, max=844, avg=529.40, stdev=118.36 00:17:56.877 lat (usec): min=154, max=858, avg=558.50, stdev=123.99 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 245], 5.00th=[ 297], 10.00th=[ 351], 20.00th=[ 429], 00:17:56.877 | 30.00th=[ 486], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 578], 00:17:56.877 | 70.00th=[ 611], 80.00th=[ 627], 90.00th=[ 660], 95.00th=[ 693], 00:17:56.877 | 99.00th=[ 750], 99.50th=[ 791], 99.90th=[ 848], 99.95th=[ 848], 00:17:56.877 | 99.99th=[ 848] 00:17:56.877 bw ( KiB/s): min= 4096, max= 4096, per=50.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.877 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.877 lat (usec) : 250=0.67%, 500=18.70%, 750=51.06%, 1000=28.44% 00:17:56.877 lat (msec) : 50=1.12% 00:17:56.877 cpu : usr=1.78%, sys=3.07%, ctx=893, majf=0, minf=1 00:17:56.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 issued rwts: total=381,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.877 job1: (groupid=0, jobs=1): err= 0: pid=1412403: Fri Jul 12 19:14:02 2024 00:17:56.877 read: IOPS=291, BW=1167KiB/s (1195kB/s)(1168KiB/1001msec) 00:17:56.877 slat (nsec): min=6106, max=54224, avg=23882.06, stdev=4395.23 00:17:56.877 clat (usec): min=685, max=42441, avg=2169.35, stdev=6697.89 00:17:56.877 lat (usec): min=709, max=42449, avg=2193.23, stdev=6697.60 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 725], 5.00th=[ 824], 10.00th=[ 881], 20.00th=[ 922], 00:17:56.877 | 30.00th=[ 963], 40.00th=[ 1012], 50.00th=[ 1045], 60.00th=[ 1090], 00:17:56.877 | 70.00th=[ 1139], 80.00th=[ 1205], 90.00th=[ 1254], 95.00th=[ 1287], 00:17:56.877 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.877 | 99.99th=[42206] 00:17:56.877 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:56.877 slat (nsec): min=9414, max=49390, avg=26900.21, stdev=9124.15 00:17:56.877 clat (usec): min=345, max=1028, avg=665.28, stdev=117.47 00:17:56.877 lat (usec): min=376, max=1059, avg=692.18, stdev=121.58 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 400], 5.00th=[ 433], 10.00th=[ 510], 20.00th=[ 578], 00:17:56.877 | 30.00th=[ 611], 40.00th=[ 644], 50.00th=[ 668], 60.00th=[ 701], 00:17:56.877 | 70.00th=[ 734], 80.00th=[ 758], 90.00th=[ 807], 95.00th=[ 848], 00:17:56.877 | 99.00th=[ 963], 99.50th=[ 1020], 99.90th=[ 1029], 99.95th=[ 1029], 00:17:56.877 | 99.99th=[ 1029] 00:17:56.877 bw ( KiB/s): min= 4096, max= 4096, per=50.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.877 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.877 lat (usec) : 500=5.10%, 750=44.53%, 1000=27.11% 00:17:56.877 lat (msec) : 2=22.26%, 50=1.00% 00:17:56.877 cpu : usr=1.40%, sys=1.80%, ctx=804, majf=0, minf=1 00:17:56.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 issued rwts: total=292,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.877 job2: (groupid=0, jobs=1): err= 0: pid=1412404: Fri Jul 12 19:14:02 2024 
00:17:56.877 read: IOPS=42, BW=172KiB/s (176kB/s)(172KiB/1001msec) 00:17:56.877 slat (nsec): min=10109, max=25864, avg=25021.47, stdev=2543.08 00:17:56.877 clat (usec): min=976, max=42077, avg=13535.00, stdev=18898.83 00:17:56.877 lat (usec): min=1002, max=42087, avg=13560.02, stdev=18898.05 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 979], 5.00th=[ 1057], 10.00th=[ 1074], 20.00th=[ 1221], 00:17:56.877 | 30.00th=[ 1237], 40.00th=[ 1287], 50.00th=[ 1303], 60.00th=[ 1352], 00:17:56.877 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:56.877 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.877 | 99.99th=[42206] 00:17:56.877 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:56.877 slat (nsec): min=10238, max=56180, avg=31490.88, stdev=8502.07 00:17:56.877 clat (usec): min=351, max=1025, avg=776.77, stdev=111.06 00:17:56.877 lat (usec): min=378, max=1059, avg=808.26, stdev=114.42 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 469], 5.00th=[ 578], 10.00th=[ 627], 20.00th=[ 693], 00:17:56.877 | 30.00th=[ 734], 40.00th=[ 758], 50.00th=[ 791], 60.00th=[ 824], 00:17:56.877 | 70.00th=[ 840], 80.00th=[ 865], 90.00th=[ 898], 95.00th=[ 930], 00:17:56.877 | 99.00th=[ 979], 99.50th=[ 1004], 99.90th=[ 1029], 99.95th=[ 1029], 00:17:56.877 | 99.99th=[ 1029] 00:17:56.877 bw ( KiB/s): min= 4096, max= 4096, per=50.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.877 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.877 lat (usec) : 500=1.98%, 750=31.71%, 1000=58.38% 00:17:56.877 lat (msec) : 2=5.59%, 50=2.34% 00:17:56.877 cpu : usr=0.80%, sys=1.70%, ctx=557, majf=0, minf=1 00:17:56.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 issued rwts: total=43,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.877 job3: (groupid=0, jobs=1): err= 0: pid=1412405: Fri Jul 12 19:14:02 2024 00:17:56.877 read: IOPS=338, BW=1353KiB/s (1386kB/s)(1356KiB/1002msec) 00:17:56.877 slat (nsec): min=6424, max=44708, avg=22610.90, stdev=6369.98 00:17:56.877 clat (usec): min=487, max=42271, avg=2019.20, stdev=6982.10 00:17:56.877 lat (usec): min=493, max=42295, avg=2041.81, stdev=6982.40 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 529], 5.00th=[ 611], 10.00th=[ 668], 20.00th=[ 725], 00:17:56.877 | 30.00th=[ 758], 40.00th=[ 783], 50.00th=[ 832], 60.00th=[ 857], 00:17:56.877 | 70.00th=[ 873], 80.00th=[ 906], 90.00th=[ 922], 95.00th=[ 955], 00:17:56.877 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:56.877 | 99.99th=[42206] 00:17:56.877 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:17:56.877 slat (nsec): min=9454, max=93863, avg=29379.07, stdev=8027.28 00:17:56.877 clat (usec): min=201, max=780, avg=561.93, stdev=112.21 00:17:56.877 lat (usec): min=211, max=825, avg=591.30, stdev=115.03 00:17:56.877 clat percentiles (usec): 00:17:56.877 | 1.00th=[ 227], 5.00th=[ 383], 10.00th=[ 416], 20.00th=[ 465], 00:17:56.877 | 30.00th=[ 523], 40.00th=[ 537], 50.00th=[ 570], 60.00th=[ 603], 00:17:56.877 | 70.00th=[ 635], 80.00th=[ 660], 90.00th=[ 693], 95.00th=[ 717], 00:17:56.877 | 99.00th=[ 758], 99.50th=[ 775], 99.90th=[ 783], 99.95th=[ 783], 00:17:56.877 
| 99.99th=[ 783] 00:17:56.877 bw ( KiB/s): min= 4096, max= 4096, per=50.60%, avg=4096.00, stdev= 0.00, samples=1 00:17:56.877 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:56.877 lat (usec) : 250=0.94%, 500=13.63%, 750=56.05%, 1000=27.85% 00:17:56.877 lat (msec) : 2=0.35%, 50=1.18% 00:17:56.877 cpu : usr=1.20%, sys=2.40%, ctx=852, majf=0, minf=1 00:17:56.877 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:56.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:56.877 issued rwts: total=339,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:56.877 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:56.877 00:17:56.877 Run status group 0 (all jobs): 00:17:56.877 READ: bw=4170KiB/s (4270kB/s), 172KiB/s-1506KiB/s (176kB/s-1542kB/s), io=4220KiB (4321kB), run=1001-1012msec 00:17:56.877 WRITE: bw=8095KiB/s (8289kB/s), 2024KiB/s-2046KiB/s (2072kB/s-2095kB/s), io=8192KiB (8389kB), run=1001-1012msec 00:17:56.877 00:17:56.877 Disk stats (read/write): 00:17:56.877 nvme0n1: ios=426/512, merge=0/0, ticks=735/203, in_queue=938, util=92.28% 00:17:56.877 nvme0n2: ios=244/512, merge=0/0, ticks=455/331, in_queue=786, util=86.21% 00:17:56.877 nvme0n3: ios=31/512, merge=0/0, ticks=1302/367, in_queue=1669, util=97.04% 00:17:56.877 nvme0n4: ios=335/512, merge=0/0, ticks=505/274, in_queue=779, util=89.52% 00:17:56.877 19:14:02 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:56.877 [global] 00:17:56.877 thread=1 00:17:56.877 invalidate=1 00:17:56.877 rw=randwrite 00:17:56.877 time_based=1 00:17:56.877 runtime=1 00:17:56.877 ioengine=libaio 00:17:56.877 direct=1 00:17:56.877 bs=4096 00:17:56.877 iodepth=1 00:17:56.877 norandommap=0 00:17:56.877 numjobs=1 00:17:56.877 00:17:56.877 verify_dump=1 00:17:56.877 verify_backlog=512 00:17:56.877 verify_state_save=0 00:17:56.877 do_verify=1 00:17:56.877 verify=crc32c-intel 00:17:56.877 [job0] 00:17:56.877 filename=/dev/nvme0n1 00:17:56.877 [job1] 00:17:56.877 filename=/dev/nvme0n2 00:17:56.877 [job2] 00:17:56.877 filename=/dev/nvme0n3 00:17:56.877 [job3] 00:17:56.877 filename=/dev/nvme0n4 00:17:56.877 Could not set queue depth (nvme0n1) 00:17:56.877 Could not set queue depth (nvme0n2) 00:17:56.877 Could not set queue depth (nvme0n3) 00:17:56.877 Could not set queue depth (nvme0n4) 00:17:57.137 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.137 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.137 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.137 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:57.137 fio-3.35 00:17:57.137 Starting 4 threads 00:17:58.533 00:17:58.533 job0: (groupid=0, jobs=1): err= 0: pid=1412859: Fri Jul 12 19:14:04 2024 00:17:58.533 read: IOPS=383, BW=1534KiB/s (1570kB/s)(1572KiB/1025msec) 00:17:58.533 slat (nsec): min=25725, max=44495, avg=26397.88, stdev=1965.92 00:17:58.533 clat (usec): min=765, max=41993, avg=1505.98, stdev=4102.14 00:17:58.533 lat (usec): min=792, max=42019, avg=1532.38, stdev=4102.13 00:17:58.533 clat percentiles (usec): 00:17:58.533 | 1.00th=[ 840], 5.00th=[ 938], 
10.00th=[ 979], 20.00th=[ 1029], 00:17:58.533 | 30.00th=[ 1057], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:17:58.533 | 70.00th=[ 1139], 80.00th=[ 1172], 90.00th=[ 1188], 95.00th=[ 1237], 00:17:58.533 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:58.533 | 99.99th=[42206] 00:17:58.533 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:17:58.533 slat (nsec): min=8700, max=65381, avg=29780.76, stdev=8725.32 00:17:58.533 clat (usec): min=477, max=1168, avg=781.07, stdev=112.11 00:17:58.533 lat (usec): min=488, max=1200, avg=810.85, stdev=114.83 00:17:58.533 clat percentiles (usec): 00:17:58.533 | 1.00th=[ 498], 5.00th=[ 603], 10.00th=[ 644], 20.00th=[ 693], 00:17:58.533 | 30.00th=[ 725], 40.00th=[ 758], 50.00th=[ 783], 60.00th=[ 807], 00:17:58.533 | 70.00th=[ 832], 80.00th=[ 865], 90.00th=[ 914], 95.00th=[ 979], 00:17:58.533 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1172], 00:17:58.533 | 99.99th=[ 1172] 00:17:58.533 bw ( KiB/s): min= 4096, max= 4096, per=51.32%, avg=4096.00, stdev= 0.00, samples=1 00:17:58.533 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:58.533 lat (usec) : 500=0.66%, 750=19.67%, 1000=40.77% 00:17:58.533 lat (msec) : 2=38.45%, 50=0.44% 00:17:58.533 cpu : usr=2.05%, sys=3.22%, ctx=905, majf=0, minf=1 00:17:58.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.533 issued rwts: total=393,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.533 job1: (groupid=0, jobs=1): err= 0: pid=1412878: Fri Jul 12 19:14:04 2024 00:17:58.533 read: IOPS=498, BW=1994KiB/s (2042kB/s)(1996KiB/1001msec) 00:17:58.533 slat (nsec): min=23684, max=54617, avg=25077.82, stdev=3443.06 00:17:58.533 clat (usec): min=899, max=1318, avg=1137.62, stdev=64.73 00:17:58.533 lat (usec): min=924, max=1343, avg=1162.69, stdev=65.00 00:17:58.533 clat percentiles (usec): 00:17:58.533 | 1.00th=[ 930], 5.00th=[ 1020], 10.00th=[ 1045], 20.00th=[ 1090], 00:17:58.533 | 30.00th=[ 1123], 40.00th=[ 1123], 50.00th=[ 1139], 60.00th=[ 1156], 00:17:58.533 | 70.00th=[ 1172], 80.00th=[ 1188], 90.00th=[ 1221], 95.00th=[ 1237], 00:17:58.533 | 99.00th=[ 1270], 99.50th=[ 1287], 99.90th=[ 1319], 99.95th=[ 1319], 00:17:58.533 | 99.99th=[ 1319] 00:17:58.533 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:17:58.533 slat (nsec): min=9444, max=62136, avg=27855.77, stdev=8376.41 00:17:58.533 clat (usec): min=412, max=1015, avg=777.06, stdev=99.41 00:17:58.533 lat (usec): min=424, max=1046, avg=804.91, stdev=103.00 00:17:58.533 clat percentiles (usec): 00:17:58.533 | 1.00th=[ 494], 5.00th=[ 578], 10.00th=[ 644], 20.00th=[ 701], 00:17:58.533 | 30.00th=[ 734], 40.00th=[ 766], 50.00th=[ 791], 60.00th=[ 816], 00:17:58.533 | 70.00th=[ 840], 80.00th=[ 857], 90.00th=[ 889], 95.00th=[ 906], 00:17:58.533 | 99.00th=[ 955], 99.50th=[ 979], 99.90th=[ 1020], 99.95th=[ 1020], 00:17:58.533 | 99.99th=[ 1020] 00:17:58.533 bw ( KiB/s): min= 4096, max= 4096, per=51.32%, avg=4096.00, stdev= 0.00, samples=1 00:17:58.533 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:58.533 lat (usec) : 500=0.59%, 750=17.51%, 1000=34.32% 00:17:58.533 lat (msec) : 2=47.58% 00:17:58.533 cpu : usr=1.60%, sys=2.70%, ctx=1011, majf=0, minf=1 00:17:58.533 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.533 issued rwts: total=499,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.534 job2: (groupid=0, jobs=1): err= 0: pid=1412897: Fri Jul 12 19:14:04 2024 00:17:58.534 read: IOPS=18, BW=73.1KiB/s (74.8kB/s)(76.0KiB/1040msec) 00:17:58.534 slat (nsec): min=24818, max=25635, avg=25149.47, stdev=216.92 00:17:58.534 clat (usec): min=897, max=42045, avg=39559.43, stdev=9372.07 00:17:58.534 lat (usec): min=923, max=42070, avg=39584.58, stdev=9371.97 00:17:58.534 clat percentiles (usec): 00:17:58.534 | 1.00th=[ 898], 5.00th=[ 898], 10.00th=[41157], 20.00th=[41157], 00:17:58.534 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:58.534 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:58.534 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:58.534 | 99.99th=[42206] 00:17:58.534 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:17:58.534 slat (nsec): min=9405, max=50355, avg=27644.88, stdev=9522.24 00:17:58.534 clat (usec): min=224, max=842, avg=526.73, stdev=116.24 00:17:58.534 lat (usec): min=236, max=875, avg=554.37, stdev=121.16 00:17:58.534 clat percentiles (usec): 00:17:58.534 | 1.00th=[ 262], 5.00th=[ 322], 10.00th=[ 363], 20.00th=[ 429], 00:17:58.534 | 30.00th=[ 465], 40.00th=[ 510], 50.00th=[ 529], 60.00th=[ 562], 00:17:58.534 | 70.00th=[ 594], 80.00th=[ 635], 90.00th=[ 676], 95.00th=[ 701], 00:17:58.534 | 99.00th=[ 783], 99.50th=[ 799], 99.90th=[ 840], 99.95th=[ 840], 00:17:58.534 | 99.99th=[ 840] 00:17:58.534 bw ( KiB/s): min= 4096, max= 4096, per=51.32%, avg=4096.00, stdev= 0.00, samples=1 00:17:58.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:58.534 lat (usec) : 250=0.94%, 500=35.97%, 750=57.63%, 1000=2.07% 00:17:58.534 lat (msec) : 50=3.39% 00:17:58.534 cpu : usr=1.15%, sys=0.96%, ctx=532, majf=0, minf=1 00:17:58.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.534 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.534 job3: (groupid=0, jobs=1): err= 0: pid=1412903: Fri Jul 12 19:14:04 2024 00:17:58.534 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:58.534 slat (nsec): min=7818, max=55878, avg=25597.73, stdev=2648.00 00:17:58.534 clat (usec): min=672, max=1519, avg=1127.82, stdev=177.03 00:17:58.534 lat (usec): min=697, max=1545, avg=1153.42, stdev=177.17 00:17:58.534 clat percentiles (usec): 00:17:58.534 | 1.00th=[ 758], 5.00th=[ 857], 10.00th=[ 906], 20.00th=[ 955], 00:17:58.534 | 30.00th=[ 988], 40.00th=[ 1020], 50.00th=[ 1205], 60.00th=[ 1254], 00:17:58.534 | 70.00th=[ 1270], 80.00th=[ 1303], 90.00th=[ 1319], 95.00th=[ 1336], 00:17:58.534 | 99.00th=[ 1401], 99.50th=[ 1418], 99.90th=[ 1516], 99.95th=[ 1516], 00:17:58.534 | 99.99th=[ 1516] 00:17:58.534 write: IOPS=538, BW=2154KiB/s (2206kB/s)(2156KiB/1001msec); 0 zone resets 00:17:58.534 slat (nsec): min=9562, max=53565, avg=28158.48, stdev=9316.89 00:17:58.534 clat (usec): 
min=272, max=1038, avg=715.11, stdev=121.70 00:17:58.534 lat (usec): min=291, max=1092, avg=743.27, stdev=125.47 00:17:58.534 clat percentiles (usec): 00:17:58.534 | 1.00th=[ 347], 5.00th=[ 494], 10.00th=[ 537], 20.00th=[ 619], 00:17:58.534 | 30.00th=[ 652], 40.00th=[ 709], 50.00th=[ 742], 60.00th=[ 766], 00:17:58.534 | 70.00th=[ 799], 80.00th=[ 816], 90.00th=[ 848], 95.00th=[ 873], 00:17:58.534 | 99.00th=[ 889], 99.50th=[ 947], 99.90th=[ 1037], 99.95th=[ 1037], 00:17:58.534 | 99.99th=[ 1037] 00:17:58.534 bw ( KiB/s): min= 4096, max= 4096, per=51.32%, avg=4096.00, stdev= 0.00, samples=1 00:17:58.534 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:58.534 lat (usec) : 500=3.04%, 750=24.55%, 1000=40.53% 00:17:58.534 lat (msec) : 2=31.87% 00:17:58.534 cpu : usr=1.70%, sys=2.80%, ctx=1052, majf=0, minf=1 00:17:58.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:58.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:58.534 issued rwts: total=512,539,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:58.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:58.534 00:17:58.534 Run status group 0 (all jobs): 00:17:58.534 READ: bw=5473KiB/s (5604kB/s), 73.1KiB/s-2046KiB/s (74.8kB/s-2095kB/s), io=5692KiB (5829kB), run=1001-1040msec 00:17:58.534 WRITE: bw=7981KiB/s (8172kB/s), 1969KiB/s-2154KiB/s (2016kB/s-2206kB/s), io=8300KiB (8499kB), run=1001-1040msec 00:17:58.534 00:17:58.534 Disk stats (read/write): 00:17:58.534 nvme0n1: ios=423/512, merge=0/0, ticks=861/331, in_queue=1192, util=95.99% 00:17:58.534 nvme0n2: ios=375/512, merge=0/0, ticks=745/370, in_queue=1115, util=90.32% 00:17:58.534 nvme0n3: ios=37/512, merge=0/0, ticks=1508/247, in_queue=1755, util=97.26% 00:17:58.534 nvme0n4: ios=405/512, merge=0/0, ticks=686/348, in_queue=1034, util=100.00% 00:17:58.534 19:14:04 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:58.534 [global] 00:17:58.534 thread=1 00:17:58.534 invalidate=1 00:17:58.534 rw=write 00:17:58.534 time_based=1 00:17:58.534 runtime=1 00:17:58.534 ioengine=libaio 00:17:58.534 direct=1 00:17:58.534 bs=4096 00:17:58.534 iodepth=128 00:17:58.534 norandommap=0 00:17:58.534 numjobs=1 00:17:58.534 00:17:58.534 verify_dump=1 00:17:58.534 verify_backlog=512 00:17:58.534 verify_state_save=0 00:17:58.534 do_verify=1 00:17:58.534 verify=crc32c-intel 00:17:58.534 [job0] 00:17:58.534 filename=/dev/nvme0n1 00:17:58.534 [job1] 00:17:58.534 filename=/dev/nvme0n2 00:17:58.534 [job2] 00:17:58.534 filename=/dev/nvme0n3 00:17:58.534 [job3] 00:17:58.534 filename=/dev/nvme0n4 00:17:58.534 Could not set queue depth (nvme0n1) 00:17:58.534 Could not set queue depth (nvme0n2) 00:17:58.534 Could not set queue depth (nvme0n3) 00:17:58.534 Could not set queue depth (nvme0n4) 00:17:58.793 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.793 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.793 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.793 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:58.793 fio-3.35 00:17:58.793 Starting 4 threads 00:18:00.176 00:18:00.176 
job0: (groupid=0, jobs=1): err= 0: pid=1413410: Fri Jul 12 19:14:05 2024 00:18:00.176 read: IOPS=7514, BW=29.4MiB/s (30.8MB/s)(29.5MiB/1004msec) 00:18:00.176 slat (nsec): min=855, max=8256.0k, avg=58482.62, stdev=424854.79 00:18:00.176 clat (usec): min=925, max=23938, avg=8188.16, stdev=2997.28 00:18:00.176 lat (usec): min=1324, max=23947, avg=8246.64, stdev=3022.43 00:18:00.176 clat percentiles (usec): 00:18:00.176 | 1.00th=[ 2180], 5.00th=[ 4113], 10.00th=[ 5080], 20.00th=[ 5997], 00:18:00.176 | 30.00th=[ 6521], 40.00th=[ 6849], 50.00th=[ 7504], 60.00th=[ 8291], 00:18:00.176 | 70.00th=[ 9110], 80.00th=[10552], 90.00th=[12518], 95.00th=[13566], 00:18:00.176 | 99.00th=[17171], 99.50th=[18220], 99.90th=[19268], 99.95th=[19268], 00:18:00.176 | 99.99th=[23987] 00:18:00.176 write: IOPS=7649, BW=29.9MiB/s (31.3MB/s)(30.0MiB/1004msec); 0 zone resets 00:18:00.176 slat (nsec): min=1506, max=6062.7k, avg=61435.03, stdev=326257.15 00:18:00.176 clat (usec): min=926, max=18136, avg=8518.41, stdev=3145.73 00:18:00.176 lat (usec): min=951, max=18141, avg=8579.85, stdev=3164.37 00:18:00.176 clat percentiles (usec): 00:18:00.176 | 1.00th=[ 2442], 5.00th=[ 3982], 10.00th=[ 4948], 20.00th=[ 6063], 00:18:00.176 | 30.00th=[ 6718], 40.00th=[ 7373], 50.00th=[ 7898], 60.00th=[ 8586], 00:18:00.176 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12911], 95.00th=[15139], 00:18:00.176 | 99.00th=[16712], 99.50th=[17171], 99.90th=[17695], 99.95th=[17957], 00:18:00.176 | 99.99th=[18220] 00:18:00.176 bw ( KiB/s): min=28672, max=32768, per=35.31%, avg=30720.00, stdev=2896.31, samples=2 00:18:00.176 iops : min= 7168, max= 8192, avg=7680.00, stdev=724.08, samples=2 00:18:00.176 lat (usec) : 1000=0.08% 00:18:00.176 lat (msec) : 2=0.56%, 4=3.99%, 10=69.81%, 20=25.54%, 50=0.01% 00:18:00.176 cpu : usr=5.48%, sys=6.38%, ctx=750, majf=0, minf=1 00:18:00.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:00.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.176 issued rwts: total=7545,7680,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.176 job1: (groupid=0, jobs=1): err= 0: pid=1413421: Fri Jul 12 19:14:05 2024 00:18:00.176 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:18:00.176 slat (nsec): min=931, max=28128k, avg=96992.95, stdev=721162.52 00:18:00.176 clat (usec): min=4628, max=43247, avg=12352.25, stdev=6485.38 00:18:00.176 lat (usec): min=4633, max=43254, avg=12449.24, stdev=6549.85 00:18:00.176 clat percentiles (usec): 00:18:00.176 | 1.00th=[ 5604], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7570], 00:18:00.176 | 30.00th=[ 8717], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10945], 00:18:00.176 | 70.00th=[12780], 80.00th=[16712], 90.00th=[19792], 95.00th=[26608], 00:18:00.176 | 99.00th=[36963], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:18:00.176 | 99.99th=[43254] 00:18:00.176 write: IOPS=4667, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1004msec); 0 zone resets 00:18:00.176 slat (nsec): min=1601, max=42789k, avg=111211.43, stdev=1011469.50 00:18:00.176 clat (usec): min=338, max=88667, avg=12712.52, stdev=7670.52 00:18:00.176 lat (usec): min=1123, max=88714, avg=12823.74, stdev=7788.27 00:18:00.176 clat percentiles (usec): 00:18:00.176 | 1.00th=[ 3228], 5.00th=[ 4752], 10.00th=[ 5604], 20.00th=[ 6783], 00:18:00.176 | 30.00th=[ 8356], 40.00th=[ 9896], 50.00th=[11207], 60.00th=[12518], 00:18:00.176 
| 70.00th=[14222], 80.00th=[15664], 90.00th=[21365], 95.00th=[30540], 00:18:00.176 | 99.00th=[39060], 99.50th=[42206], 99.90th=[46924], 99.95th=[88605], 00:18:00.176 | 99.99th=[88605] 00:18:00.176 bw ( KiB/s): min=16040, max=20824, per=21.19%, avg=18432.00, stdev=3382.80, samples=2 00:18:00.176 iops : min= 4010, max= 5206, avg=4608.00, stdev=845.70, samples=2 00:18:00.176 lat (usec) : 500=0.01% 00:18:00.176 lat (msec) : 2=0.17%, 4=0.47%, 10=43.22%, 20=44.96%, 50=11.13% 00:18:00.176 lat (msec) : 100=0.03% 00:18:00.176 cpu : usr=3.59%, sys=4.19%, ctx=424, majf=0, minf=1 00:18:00.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:00.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.176 issued rwts: total=4608,4686,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.176 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.176 job2: (groupid=0, jobs=1): err= 0: pid=1413438: Fri Jul 12 19:14:05 2024 00:18:00.176 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:18:00.176 slat (nsec): min=871, max=7424.6k, avg=89617.92, stdev=539996.26 00:18:00.176 clat (usec): min=4746, max=24153, avg=11321.07, stdev=2859.90 00:18:00.176 lat (usec): min=4753, max=24161, avg=11410.69, stdev=2899.94 00:18:00.176 clat percentiles (usec): 00:18:00.177 | 1.00th=[ 6194], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 8979], 00:18:00.177 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11600], 00:18:00.177 | 70.00th=[12387], 80.00th=[13698], 90.00th=[15270], 95.00th=[16188], 00:18:00.177 | 99.00th=[19006], 99.50th=[21890], 99.90th=[21890], 99.95th=[21890], 00:18:00.177 | 99.99th=[24249] 00:18:00.177 write: IOPS=5863, BW=22.9MiB/s (24.0MB/s)(23.0MiB/1004msec); 0 zone resets 00:18:00.177 slat (nsec): min=1523, max=5899.0k, avg=79113.49, stdev=418062.12 00:18:00.177 clat (usec): min=1155, max=26670, avg=10804.23, stdev=4336.63 00:18:00.177 lat (usec): min=1166, max=26679, avg=10883.35, stdev=4371.06 00:18:00.177 clat percentiles (usec): 00:18:00.177 | 1.00th=[ 4555], 5.00th=[ 6325], 10.00th=[ 7242], 20.00th=[ 8029], 00:18:00.177 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10159], 00:18:00.177 | 70.00th=[10945], 80.00th=[12649], 90.00th=[17433], 95.00th=[21627], 00:18:00.177 | 99.00th=[24249], 99.50th=[25297], 99.90th=[26608], 99.95th=[26608], 00:18:00.177 | 99.99th=[26608] 00:18:00.177 bw ( KiB/s): min=20792, max=25288, per=26.48%, avg=23040.00, stdev=3179.15, samples=2 00:18:00.177 iops : min= 5198, max= 6322, avg=5760.00, stdev=794.79, samples=2 00:18:00.177 lat (msec) : 2=0.12%, 4=0.29%, 10=48.84%, 20=46.73%, 50=4.02% 00:18:00.177 cpu : usr=4.69%, sys=4.39%, ctx=588, majf=0, minf=1 00:18:00.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:00.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.177 issued rwts: total=5632,5887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.177 job3: (groupid=0, jobs=1): err= 0: pid=1413445: Fri Jul 12 19:14:05 2024 00:18:00.177 read: IOPS=3341, BW=13.1MiB/s (13.7MB/s)(13.1MiB/1002msec) 00:18:00.177 slat (nsec): min=935, max=16354k, avg=154631.67, stdev=940991.98 00:18:00.177 clat (usec): min=1195, max=48616, avg=19775.22, stdev=7648.78 00:18:00.177 lat (usec): min=4704, 
max=48623, avg=19929.85, stdev=7668.56 00:18:00.177 clat percentiles (usec): 00:18:00.177 | 1.00th=[ 5473], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[12518], 00:18:00.177 | 30.00th=[14615], 40.00th=[17171], 50.00th=[19792], 60.00th=[22152], 00:18:00.177 | 70.00th=[23462], 80.00th=[24773], 90.00th=[28967], 95.00th=[33424], 00:18:00.177 | 99.00th=[42730], 99.50th=[42730], 99.90th=[48497], 99.95th=[48497], 00:18:00.177 | 99.99th=[48497] 00:18:00.177 write: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:18:00.177 slat (nsec): min=1662, max=33651k, avg=128319.96, stdev=1020731.95 00:18:00.177 clat (usec): min=6001, max=44766, avg=16273.39, stdev=7761.97 00:18:00.177 lat (usec): min=6011, max=44775, avg=16401.71, stdev=7785.82 00:18:00.177 clat percentiles (usec): 00:18:00.177 | 1.00th=[ 6980], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10552], 00:18:00.177 | 30.00th=[11207], 40.00th=[12518], 50.00th=[14484], 60.00th=[15926], 00:18:00.177 | 70.00th=[17957], 80.00th=[20579], 90.00th=[24511], 95.00th=[39584], 00:18:00.177 | 99.00th=[41681], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827], 00:18:00.177 | 99.99th=[44827] 00:18:00.177 bw ( KiB/s): min=12288, max=16384, per=16.48%, avg=14336.00, stdev=2896.31, samples=2 00:18:00.177 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:18:00.177 lat (msec) : 2=0.01%, 10=11.97%, 20=51.83%, 50=36.18% 00:18:00.177 cpu : usr=2.80%, sys=4.00%, ctx=268, majf=0, minf=1 00:18:00.177 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:00.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:00.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:00.177 issued rwts: total=3348,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:00.177 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:00.177 00:18:00.177 Run status group 0 (all jobs): 00:18:00.177 READ: bw=82.2MiB/s (86.2MB/s), 13.1MiB/s-29.4MiB/s (13.7MB/s-30.8MB/s), io=82.6MiB (86.6MB), run=1002-1004msec 00:18:00.177 WRITE: bw=85.0MiB/s (89.1MB/s), 14.0MiB/s-29.9MiB/s (14.7MB/s-31.3MB/s), io=85.3MiB (89.4MB), run=1002-1004msec 00:18:00.177 00:18:00.177 Disk stats (read/write): 00:18:00.177 nvme0n1: ios=6194/6506, merge=0/0, ticks=40567/43882, in_queue=84449, util=92.48% 00:18:00.177 nvme0n2: ios=3521/3584, merge=0/0, ticks=23704/21166, in_queue=44870, util=99.80% 00:18:00.177 nvme0n3: ios=4629/4911, merge=0/0, ticks=25711/25360, in_queue=51071, util=88.84% 00:18:00.177 nvme0n4: ios=2604/2939, merge=0/0, ticks=15788/11275, in_queue=27063, util=99.89% 00:18:00.177 19:14:05 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:00.177 [global] 00:18:00.177 thread=1 00:18:00.177 invalidate=1 00:18:00.177 rw=randwrite 00:18:00.177 time_based=1 00:18:00.177 runtime=1 00:18:00.177 ioengine=libaio 00:18:00.177 direct=1 00:18:00.177 bs=4096 00:18:00.177 iodepth=128 00:18:00.177 norandommap=0 00:18:00.177 numjobs=1 00:18:00.177 00:18:00.177 verify_dump=1 00:18:00.177 verify_backlog=512 00:18:00.177 verify_state_save=0 00:18:00.177 do_verify=1 00:18:00.177 verify=crc32c-intel 00:18:00.177 [job0] 00:18:00.177 filename=/dev/nvme0n1 00:18:00.177 [job1] 00:18:00.177 filename=/dev/nvme0n2 00:18:00.177 [job2] 00:18:00.177 filename=/dev/nvme0n3 00:18:00.177 [job3] 00:18:00.177 filename=/dev/nvme0n4 00:18:00.177 Could not set queue depth (nvme0n1) 00:18:00.177 Could not set queue 
depth (nvme0n2) 00:18:00.177 Could not set queue depth (nvme0n3) 00:18:00.177 Could not set queue depth (nvme0n4) 00:18:00.438 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.438 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.438 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.438 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:00.438 fio-3.35 00:18:00.438 Starting 4 threads 00:18:01.846 00:18:01.846 job0: (groupid=0, jobs=1): err= 0: pid=1413890: Fri Jul 12 19:14:07 2024 00:18:01.846 read: IOPS=5304, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1003msec) 00:18:01.846 slat (nsec): min=862, max=26140k, avg=99293.07, stdev=761175.38 00:18:01.846 clat (usec): min=2168, max=79127, avg=13069.15, stdev=9190.08 00:18:01.846 lat (usec): min=2173, max=79150, avg=13168.44, stdev=9270.04 00:18:01.846 clat percentiles (usec): 00:18:01.846 | 1.00th=[ 5145], 5.00th=[ 7635], 10.00th=[ 7963], 20.00th=[ 8455], 00:18:01.846 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[10159], 60.00th=[10945], 00:18:01.846 | 70.00th=[13435], 80.00th=[15401], 90.00th=[17433], 95.00th=[29492], 00:18:01.846 | 99.00th=[57410], 99.50th=[57410], 99.90th=[57410], 99.95th=[69731], 00:18:01.846 | 99.99th=[79168] 00:18:01.846 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:18:01.846 slat (nsec): min=1444, max=13595k, avg=79210.33, stdev=517440.37 00:18:01.846 clat (usec): min=3592, max=42792, avg=10177.83, stdev=3993.57 00:18:01.846 lat (usec): min=3616, max=42822, avg=10257.04, stdev=4039.58 00:18:01.846 clat percentiles (usec): 00:18:01.846 | 1.00th=[ 6718], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8160], 00:18:01.846 | 30.00th=[ 8455], 40.00th=[ 8979], 50.00th=[ 9241], 60.00th=[ 9634], 00:18:01.846 | 70.00th=[10159], 80.00th=[11076], 90.00th=[12649], 95.00th=[14484], 00:18:01.846 | 99.00th=[30540], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:18:01.846 | 99.99th=[42730] 00:18:01.846 bw ( KiB/s): min=19552, max=25504, per=23.37%, avg=22528.00, stdev=4208.70, samples=2 00:18:01.846 iops : min= 4888, max= 6376, avg=5632.00, stdev=1052.17, samples=2 00:18:01.846 lat (msec) : 4=0.31%, 10=58.44%, 20=37.05%, 50=2.66%, 100=1.54% 00:18:01.846 cpu : usr=3.99%, sys=4.19%, ctx=394, majf=0, minf=1 00:18:01.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:18:01.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.846 issued rwts: total=5320,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.846 job1: (groupid=0, jobs=1): err= 0: pid=1413899: Fri Jul 12 19:14:07 2024 00:18:01.846 read: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec) 00:18:01.846 slat (nsec): min=860, max=5908.4k, avg=68084.12, stdev=405638.90 00:18:01.846 clat (usec): min=2559, max=21954, avg=8903.24, stdev=2905.97 00:18:01.846 lat (usec): min=2561, max=21962, avg=8971.32, stdev=2937.32 00:18:01.846 clat percentiles (usec): 00:18:01.846 | 1.00th=[ 4228], 5.00th=[ 5145], 10.00th=[ 6063], 20.00th=[ 6652], 00:18:01.846 | 30.00th=[ 6980], 40.00th=[ 7635], 50.00th=[ 8094], 60.00th=[ 8848], 00:18:01.846 | 70.00th=[ 9765], 80.00th=[11076], 90.00th=[13304], 
95.00th=[15008], 00:18:01.846 | 99.00th=[17171], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:18:01.846 | 99.99th=[21890] 00:18:01.846 write: IOPS=7274, BW=28.4MiB/s (29.8MB/s)(28.6MiB/1006msec); 0 zone resets 00:18:01.846 slat (nsec): min=1465, max=5446.7k, avg=65320.62, stdev=336386.56 00:18:01.846 clat (usec): min=1585, max=21184, avg=8669.18, stdev=3415.58 00:18:01.846 lat (usec): min=1589, max=21208, avg=8734.50, stdev=3443.50 00:18:01.846 clat percentiles (usec): 00:18:01.846 | 1.00th=[ 3195], 5.00th=[ 4752], 10.00th=[ 5145], 20.00th=[ 6194], 00:18:01.846 | 30.00th=[ 6652], 40.00th=[ 7111], 50.00th=[ 7701], 60.00th=[ 8455], 00:18:01.846 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[14484], 95.00th=[16581], 00:18:01.846 | 99.00th=[18220], 99.50th=[18744], 99.90th=[19530], 99.95th=[19530], 00:18:01.846 | 99.99th=[21103] 00:18:01.846 bw ( KiB/s): min=24576, max=32952, per=29.84%, avg=28764.00, stdev=5922.73, samples=2 00:18:01.846 iops : min= 6144, max= 8238, avg=7191.00, stdev=1480.68, samples=2 00:18:01.846 lat (msec) : 2=0.10%, 4=1.49%, 10=72.95%, 20=25.45%, 50=0.01% 00:18:01.846 cpu : usr=4.18%, sys=6.77%, ctx=753, majf=0, minf=1 00:18:01.846 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:01.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.846 issued rwts: total=7168,7318,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.846 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.846 job2: (groupid=0, jobs=1): err= 0: pid=1413917: Fri Jul 12 19:14:07 2024 00:18:01.846 read: IOPS=6101, BW=23.8MiB/s (25.0MB/s)(24.0MiB/1007msec) 00:18:01.846 slat (nsec): min=893, max=13309k, avg=83446.15, stdev=652830.88 00:18:01.846 clat (usec): min=2103, max=49583, avg=10903.74, stdev=6816.07 00:18:01.846 lat (usec): min=2109, max=49608, avg=10987.18, stdev=6887.58 00:18:01.846 clat percentiles (usec): 00:18:01.847 | 1.00th=[ 2671], 5.00th=[ 5342], 10.00th=[ 6390], 20.00th=[ 7177], 00:18:01.847 | 30.00th=[ 7570], 40.00th=[ 7963], 50.00th=[ 8356], 60.00th=[ 9110], 00:18:01.847 | 70.00th=[10421], 80.00th=[11863], 90.00th=[20317], 95.00th=[26084], 00:18:01.847 | 99.00th=[38011], 99.50th=[38536], 99.90th=[45876], 99.95th=[45876], 00:18:01.847 | 99.99th=[49546] 00:18:01.847 write: IOPS=6150, BW=24.0MiB/s (25.2MB/s)(24.2MiB/1007msec); 0 zone resets 00:18:01.847 slat (nsec): min=1536, max=9015.2k, avg=65706.59, stdev=469455.52 00:18:01.847 clat (usec): min=918, max=45829, avg=9804.12, stdev=5681.52 00:18:01.847 lat (usec): min=1276, max=45837, avg=9869.83, stdev=5712.51 00:18:01.847 clat percentiles (usec): 00:18:01.847 | 1.00th=[ 1729], 5.00th=[ 4228], 10.00th=[ 4686], 20.00th=[ 5866], 00:18:01.847 | 30.00th=[ 6718], 40.00th=[ 7177], 50.00th=[ 8225], 60.00th=[ 9372], 00:18:01.847 | 70.00th=[10421], 80.00th=[13304], 90.00th=[17695], 95.00th=[21365], 00:18:01.847 | 99.00th=[32637], 99.50th=[35390], 99.90th=[38011], 99.95th=[38011], 00:18:01.847 | 99.99th=[45876] 00:18:01.847 bw ( KiB/s): min=17192, max=31960, per=25.50%, avg=24576.00, stdev=10442.55, samples=2 00:18:01.847 iops : min= 4298, max= 7990, avg=6144.00, stdev=2610.64, samples=2 00:18:01.847 lat (usec) : 1000=0.01% 00:18:01.847 lat (msec) : 2=0.56%, 4=3.10%, 10=62.87%, 20=24.75%, 50=8.71% 00:18:01.847 cpu : usr=5.07%, sys=6.16%, ctx=390, majf=0, minf=1 00:18:01.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:18:01.847 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.847 issued rwts: total=6144,6194,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.847 job3: (groupid=0, jobs=1): err= 0: pid=1413924: Fri Jul 12 19:14:07 2024 00:18:01.847 read: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1006msec) 00:18:01.847 slat (nsec): min=922, max=9788.9k, avg=98269.67, stdev=610637.21 00:18:01.847 clat (usec): min=3266, max=32718, avg=12703.09, stdev=3755.30 00:18:01.847 lat (usec): min=4094, max=32747, avg=12801.36, stdev=3791.19 00:18:01.847 clat percentiles (usec): 00:18:01.847 | 1.00th=[ 5211], 5.00th=[ 6652], 10.00th=[ 8848], 20.00th=[10814], 00:18:01.847 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12125], 60.00th=[12649], 00:18:01.847 | 70.00th=[13173], 80.00th=[14877], 90.00th=[17171], 95.00th=[18744], 00:18:01.847 | 99.00th=[27657], 99.50th=[27919], 99.90th=[28443], 99.95th=[31589], 00:18:01.847 | 99.99th=[32637] 00:18:01.847 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:18:01.847 slat (nsec): min=1546, max=7563.1k, avg=94446.99, stdev=520731.21 00:18:01.847 clat (usec): min=3733, max=34222, avg=12445.38, stdev=4393.22 00:18:01.847 lat (usec): min=4131, max=34230, avg=12539.83, stdev=4415.26 00:18:01.847 clat percentiles (usec): 00:18:01.847 | 1.00th=[ 5145], 5.00th=[ 6980], 10.00th=[ 8225], 20.00th=[ 9765], 00:18:01.847 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11338], 60.00th=[11731], 00:18:01.847 | 70.00th=[12780], 80.00th=[14353], 90.00th=[18482], 95.00th=[21365], 00:18:01.847 | 99.00th=[27132], 99.50th=[29492], 99.90th=[34341], 99.95th=[34341], 00:18:01.847 | 99.99th=[34341] 00:18:01.847 bw ( KiB/s): min=16384, max=24576, per=21.25%, avg=20480.00, stdev=5792.62, samples=2 00:18:01.847 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:18:01.847 lat (msec) : 4=0.02%, 10=18.22%, 20=75.74%, 50=6.02% 00:18:01.847 cpu : usr=2.79%, sys=5.77%, ctx=455, majf=0, minf=1 00:18:01.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:01.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:01.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:01.847 issued rwts: total=4981,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:01.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:01.847 00:18:01.847 Run status group 0 (all jobs): 00:18:01.847 READ: bw=91.6MiB/s (96.0MB/s), 19.3MiB/s-27.8MiB/s (20.3MB/s-29.2MB/s), io=92.2MiB (96.7MB), run=1003-1007msec 00:18:01.847 WRITE: bw=94.1MiB/s (98.7MB/s), 19.9MiB/s-28.4MiB/s (20.8MB/s-29.8MB/s), io=94.8MiB (99.4MB), run=1003-1007msec 00:18:01.847 00:18:01.847 Disk stats (read/write): 00:18:01.847 nvme0n1: ios=4146/4500, merge=0/0, ticks=19448/15983, in_queue=35431, util=87.17% 00:18:01.847 nvme0n2: ios=5663/5733, merge=0/0, ticks=26101/23092, in_queue=49193, util=97.04% 00:18:01.847 nvme0n3: ios=5215/5632, merge=0/0, ticks=36345/37419, in_queue=73764, util=88.51% 00:18:01.847 nvme0n4: ios=4183/4608, merge=0/0, ticks=20980/21541, in_queue=42521, util=99.79% 00:18:01.847 19:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:01.847 19:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1414009 00:18:01.847 19:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:01.847 19:14:07 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:01.847 [global] 00:18:01.847 thread=1 00:18:01.847 invalidate=1 00:18:01.847 rw=read 00:18:01.847 time_based=1 00:18:01.847 runtime=10 00:18:01.847 ioengine=libaio 00:18:01.847 direct=1 00:18:01.847 bs=4096 00:18:01.847 iodepth=1 00:18:01.847 norandommap=1 00:18:01.847 numjobs=1 00:18:01.847 00:18:01.847 [job0] 00:18:01.847 filename=/dev/nvme0n1 00:18:01.847 [job1] 00:18:01.847 filename=/dev/nvme0n2 00:18:01.847 [job2] 00:18:01.847 filename=/dev/nvme0n3 00:18:01.847 [job3] 00:18:01.847 filename=/dev/nvme0n4 00:18:01.847 Could not set queue depth (nvme0n1) 00:18:01.847 Could not set queue depth (nvme0n2) 00:18:01.847 Could not set queue depth (nvme0n3) 00:18:01.847 Could not set queue depth (nvme0n4) 00:18:02.107 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:02.107 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:02.107 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:02.107 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:02.107 fio-3.35 00:18:02.107 Starting 4 threads 00:18:04.651 19:14:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:04.651 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=2580480, buflen=4096 00:18:04.651 fio: pid=1414377, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:04.651 19:14:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:04.912 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=12419072, buflen=4096 00:18:04.912 fio: pid=1414370, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:04.912 19:14:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:04.912 19:14:10 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:05.173 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.173 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:05.173 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=286720, buflen=4096 00:18:05.173 fio: pid=1414352, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:05.173 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.173 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:05.173 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=9285632, buflen=4096 00:18:05.173 fio: pid=1414358, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:05.173 00:18:05.173 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1414352: Fri Jul 12 19:14:11 2024 
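The [global]/[jobN] listing traced above is the entire workload description for this read pass. Assuming the wrapper arguments (-p nvmf -i 4096 -d 1 -t read -r 10) map one-to-one onto the options it echoed, an equivalent standalone jobfile would look roughly like the sketch below; it is reconstructed for reference from the echoed options, not a file captured from the run.

[global]
thread=1
invalidate=1
rw=read
time_based=1
runtime=10
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=1
numjobs=1

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4

The per-job results that follow (job0 through job3) are fio's normal end-of-run output for these four jobs, with err=121 reflecting the Remote I/O errors injected by the bdev deletions above.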
00:18:05.173 read: IOPS=24, BW=95.6KiB/s (97.9kB/s)(280KiB/2929msec) 00:18:05.173 slat (usec): min=26, max=7644, avg=134.46, stdev=903.97 00:18:05.173 clat (usec): min=1338, max=45050, avg=41403.60, stdev=4877.47 00:18:05.173 lat (usec): min=1370, max=49053, avg=41539.59, stdev=4961.23 00:18:05.173 clat percentiles (usec): 00:18:05.173 | 1.00th=[ 1336], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:05.173 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:05.173 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:05.173 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:18:05.173 | 99.99th=[44827] 00:18:05.173 bw ( KiB/s): min= 96, max= 96, per=1.23%, avg=96.00, stdev= 0.00, samples=5 00:18:05.173 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:18:05.173 lat (msec) : 2=1.41%, 50=97.18% 00:18:05.173 cpu : usr=0.00%, sys=0.14%, ctx=74, majf=0, minf=1 00:18:05.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.173 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.173 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.173 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1414358: Fri Jul 12 19:14:11 2024 00:18:05.173 read: IOPS=735, BW=2941KiB/s (3012kB/s)(9068KiB/3083msec) 00:18:05.173 slat (usec): min=6, max=16483, avg=31.70, stdev=345.81 00:18:05.173 clat (usec): min=430, max=42095, avg=1312.48, stdev=3317.92 00:18:05.173 lat (usec): min=455, max=57973, avg=1344.18, stdev=3425.28 00:18:05.173 clat percentiles (usec): 00:18:05.173 | 1.00th=[ 562], 5.00th=[ 668], 10.00th=[ 734], 20.00th=[ 832], 00:18:05.173 | 30.00th=[ 955], 40.00th=[ 1057], 50.00th=[ 1123], 60.00th=[ 1156], 00:18:05.173 | 70.00th=[ 1188], 80.00th=[ 1205], 90.00th=[ 1237], 95.00th=[ 1270], 00:18:05.173 | 99.00th=[ 1319], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:05.173 | 99.99th=[42206] 00:18:05.173 bw ( KiB/s): min= 3304, max= 4216, per=46.33%, avg=3606.40, stdev=412.58, samples=5 00:18:05.173 iops : min= 826, max= 1054, avg=901.60, stdev=103.14, samples=5 00:18:05.173 lat (usec) : 500=0.22%, 750=11.64%, 1000=21.12% 00:18:05.173 lat (msec) : 2=66.31%, 50=0.66% 00:18:05.173 cpu : usr=0.55%, sys=2.34%, ctx=2272, majf=0, minf=1 00:18:05.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.173 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.173 issued rwts: total=2268,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.173 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1414370: Fri Jul 12 19:14:11 2024 00:18:05.173 read: IOPS=1109, BW=4436KiB/s (4542kB/s)(11.8MiB/2734msec) 00:18:05.173 slat (usec): min=6, max=12637, avg=30.56, stdev=300.57 00:18:05.173 clat (usec): min=230, max=1602, avg=858.17, stdev=224.67 00:18:05.173 lat (usec): min=237, max=13895, avg=888.73, stdev=384.01 00:18:05.173 clat percentiles (usec): 00:18:05.173 | 1.00th=[ 482], 5.00th=[ 586], 10.00th=[ 635], 20.00th=[ 693], 00:18:05.173 | 30.00th=[ 734], 40.00th=[ 775], 50.00th=[ 816], 60.00th=[ 840], 
00:18:05.173 | 70.00th=[ 865], 80.00th=[ 930], 90.00th=[ 1287], 95.00th=[ 1319], 00:18:05.173 | 99.00th=[ 1385], 99.50th=[ 1385], 99.90th=[ 1450], 99.95th=[ 1549], 00:18:05.173 | 99.99th=[ 1598] 00:18:05.173 bw ( KiB/s): min= 3048, max= 5080, per=59.38%, avg=4622.40, stdev=882.20, samples=5 00:18:05.173 iops : min= 762, max= 1270, avg=1155.60, stdev=220.55, samples=5 00:18:05.173 lat (usec) : 250=0.03%, 500=1.52%, 750=33.70%, 1000=45.86% 00:18:05.173 lat (msec) : 2=18.86% 00:18:05.173 cpu : usr=1.17%, sys=2.93%, ctx=3036, majf=0, minf=1 00:18:05.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.173 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.173 issued rwts: total=3033,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.173 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.173 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1414377: Fri Jul 12 19:14:11 2024 00:18:05.173 read: IOPS=245, BW=980KiB/s (1004kB/s)(2520KiB/2571msec) 00:18:05.173 slat (nsec): min=21039, max=62285, avg=24975.68, stdev=3456.17 00:18:05.173 clat (usec): min=591, max=42073, avg=4015.91, stdev=10661.37 00:18:05.173 lat (usec): min=617, max=42098, avg=4040.88, stdev=10661.17 00:18:05.173 clat percentiles (usec): 00:18:05.173 | 1.00th=[ 742], 5.00th=[ 832], 10.00th=[ 906], 20.00th=[ 963], 00:18:05.174 | 30.00th=[ 996], 40.00th=[ 1029], 50.00th=[ 1057], 60.00th=[ 1074], 00:18:05.174 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1188], 95.00th=[42206], 00:18:05.174 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:05.174 | 99.99th=[42206] 00:18:05.174 bw ( KiB/s): min= 96, max= 3768, per=12.00%, avg=934.40, stdev=1599.95, samples=5 00:18:05.174 iops : min= 24, max= 942, avg=233.60, stdev=399.99, samples=5 00:18:05.174 lat (usec) : 750=1.27%, 1000=29.79% 00:18:05.174 lat (msec) : 2=61.49%, 50=7.29% 00:18:05.174 cpu : usr=0.19%, sys=0.78%, ctx=632, majf=0, minf=2 00:18:05.174 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:05.174 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.174 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:05.174 issued rwts: total=631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:05.174 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:05.174 00:18:05.174 Run status group 0 (all jobs): 00:18:05.174 READ: bw=7783KiB/s (7970kB/s), 95.6KiB/s-4436KiB/s (97.9kB/s-4542kB/s), io=23.4MiB (24.6MB), run=2571-3083msec 00:18:05.174 00:18:05.174 Disk stats (read/write): 00:18:05.174 nvme0n1: ios=97/0, merge=0/0, ticks=3605/0, in_queue=3605, util=98.93% 00:18:05.174 nvme0n2: ios=2261/0, merge=0/0, ticks=2639/0, in_queue=2639, util=94.76% 00:18:05.174 nvme0n3: ios=2954/0, merge=0/0, ticks=2431/0, in_queue=2431, util=96.03% 00:18:05.174 nvme0n4: ios=427/0, merge=0/0, ticks=2315/0, in_queue=2315, util=96.06% 00:18:05.434 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.434 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:05.695 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.695 
19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:05.695 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.695 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:05.956 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:05.956 19:14:11 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1414009 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:06.218 nvmf hotplug test: fio failed as expected 00:18:06.218 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.479 rmmod nvme_tcp 00:18:06.479 rmmod nvme_fabrics 00:18:06.479 rmmod nvme_keyring 
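Condensed, the hotplug check that fio.sh traced above boils down to the following flow (script paths shortened to their basenames and the literal PID replaced by a shell variable, so read it as an illustrative sketch rather than commands copied verbatim from the harness):

fio_status=0
fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &    # background reads against nvme0n1..nvme0n4
fio_pid=$!
sleep 3
rpc.py bdev_raid_delete concat0                     # remove backing bdevs while I/O is in flight
rpc.py bdev_raid_delete raid0
rpc.py bdev_malloc_delete Malloc0                   # Malloc1 through Malloc6 are deleted the same way
wait "$fio_pid" || fio_status=4                     # the reads hit Remote I/O errors, so fio exits non-zero
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
[ "$fio_status" -ne 0 ] && echo 'nvmf hotplug test: fio failed as expected'

The rmmod/killprocess output around this point is nvmftestfini tearing down the kernel initiator modules and the nvmf_tgt process (pid 1410504) started for this test.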
00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1410504 ']' 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1410504 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1410504 ']' 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1410504 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.479 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1410504 00:18:06.480 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:06.480 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:06.480 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1410504' 00:18:06.480 killing process with pid 1410504 00:18:06.480 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1410504 00:18:06.480 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1410504 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.740 19:14:12 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.651 19:14:14 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.651 00:18:08.651 real 0m28.308s 00:18:08.651 user 2m33.691s 00:18:08.651 sys 0m9.076s 00:18:08.651 19:14:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.651 19:14:14 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.651 ************************************ 00:18:08.651 END TEST nvmf_fio_target 00:18:08.651 ************************************ 00:18:08.651 19:14:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:08.651 19:14:14 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:08.651 19:14:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.652 19:14:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.652 19:14:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.913 ************************************ 00:18:08.913 START TEST nvmf_bdevio 00:18:08.913 ************************************ 00:18:08.913 19:14:14 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:08.913 * Looking for test storage... 00:18:08.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.913 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.914 19:14:14 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:15.501 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:15.502 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:15.502 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:18:15.502 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:15.502 
Found net devices under 0000:4b:00.1: cvl_0_1 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:15.502 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:15.763 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:15.763 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:15.763 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:15.763 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:15.763 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:15.763 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:16.025 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:16.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:16.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.410 ms 00:18:16.025 00:18:16.025 --- 10.0.0.2 ping statistics --- 00:18:16.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.025 rtt min/avg/max/mdev = 0.410/0.410/0.410/0.000 ms 00:18:16.025 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:16.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:16.025 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:18:16.025 00:18:16.025 --- 10.0.0.1 ping statistics --- 00:18:16.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:16.025 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:18:16.025 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:16.025 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:16.025 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:16.025 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1419374 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1419374 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1419374 ']' 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:16.026 19:14:21 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.026 [2024-07-12 19:14:22.032027] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:18:16.026 [2024-07-12 19:14:22.032138] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.026 EAL: No free 2048 kB hugepages reported on node 1 00:18:16.026 [2024-07-12 19:14:22.126307] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.288 [2024-07-12 19:14:22.222996] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.288 [2024-07-12 19:14:22.223056] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:16.288 [2024-07-12 19:14:22.223064] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.288 [2024-07-12 19:14:22.223070] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.288 [2024-07-12 19:14:22.223077] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:16.288 [2024-07-12 19:14:22.223241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:16.288 [2024-07-12 19:14:22.223395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:16.288 [2024-07-12 19:14:22.223553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.288 [2024-07-12 19:14:22.223553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.864 [2024-07-12 19:14:22.872335] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.864 Malloc0 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
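Condensed, the target-side setup that the rpc_cmd trace above performs is the short sequence below (the rpc.py path is shortened to the script name, so treat this as an illustrative sketch rather than commands captured verbatim); the *** NVMe/TCP Target Listening *** notice that follows confirms the listener, and the bdevio run afterwards attaches to it using the bdev_nvme_attach_controller parameters printed a little further below:

rpc.py nvmf_create_transport -t tcp -o -u 8192                                    # same transport options the harness used
rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # 64 MiB malloc bdev with 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420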
00:18:16.864 [2024-07-12 19:14:22.938011] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:16.864 { 00:18:16.864 "params": { 00:18:16.864 "name": "Nvme$subsystem", 00:18:16.864 "trtype": "$TEST_TRANSPORT", 00:18:16.864 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:16.864 "adrfam": "ipv4", 00:18:16.864 "trsvcid": "$NVMF_PORT", 00:18:16.864 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:16.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:16.864 "hdgst": ${hdgst:-false}, 00:18:16.864 "ddgst": ${ddgst:-false} 00:18:16.864 }, 00:18:16.864 "method": "bdev_nvme_attach_controller" 00:18:16.864 } 00:18:16.864 EOF 00:18:16.864 )") 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:16.864 19:14:22 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:16.864 "params": { 00:18:16.864 "name": "Nvme1", 00:18:16.864 "trtype": "tcp", 00:18:16.864 "traddr": "10.0.0.2", 00:18:16.864 "adrfam": "ipv4", 00:18:16.864 "trsvcid": "4420", 00:18:16.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:16.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:16.864 "hdgst": false, 00:18:16.864 "ddgst": false 00:18:16.864 }, 00:18:16.864 "method": "bdev_nvme_attach_controller" 00:18:16.864 }' 00:18:17.125 [2024-07-12 19:14:22.995386] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:18:17.125 [2024-07-12 19:14:22.995453] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1419556 ] 00:18:17.125 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.125 [2024-07-12 19:14:23.059630] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:17.125 [2024-07-12 19:14:23.135346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.125 [2024-07-12 19:14:23.135525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.125 [2024-07-12 19:14:23.135529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.385 I/O targets: 00:18:17.385 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:17.385 00:18:17.385 00:18:17.385 CUnit - A unit testing framework for C - Version 2.1-3 00:18:17.385 http://cunit.sourceforge.net/ 00:18:17.385 00:18:17.385 00:18:17.385 Suite: bdevio tests on: Nvme1n1 00:18:17.385 Test: blockdev write read block ...passed 00:18:17.646 Test: blockdev write zeroes read block ...passed 00:18:17.646 Test: blockdev write zeroes read no split ...passed 00:18:17.646 Test: blockdev write zeroes read split ...passed 00:18:17.646 Test: blockdev write zeroes read split partial ...passed 00:18:17.646 Test: blockdev reset ...[2024-07-12 19:14:23.570258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:17.646 [2024-07-12 19:14:23.570319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xba3ce0 (9): Bad file descriptor 00:18:17.646 [2024-07-12 19:14:23.588111] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:17.646 passed 00:18:17.646 Test: blockdev write read 8 blocks ...passed 00:18:17.646 Test: blockdev write read size > 128k ...passed 00:18:17.646 Test: blockdev write read invalid size ...passed 00:18:17.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:17.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:17.646 Test: blockdev write read max offset ...passed 00:18:17.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:17.907 Test: blockdev writev readv 8 blocks ...passed 00:18:17.907 Test: blockdev writev readv 30 x 1block ...passed 00:18:17.907 Test: blockdev writev readv block ...passed 00:18:17.907 Test: blockdev writev readv size > 128k ...passed 00:18:17.907 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:17.907 Test: blockdev comparev and writev ...[2024-07-12 19:14:23.856877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.856903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.856914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.856920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.857362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.857371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.857381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.857387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.857796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.857804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.857814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.857819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.858256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.858266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.858275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:17.907 [2024-07-12 19:14:23.858281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.907 passed 00:18:17.907 Test: blockdev nvme passthru rw ...passed 00:18:17.907 Test: blockdev nvme passthru vendor specific ...[2024-07-12 19:14:23.942825] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.907 [2024-07-12 19:14:23.942836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.943120] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.907 [2024-07-12 19:14:23.943131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.943402] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.907 [2024-07-12 19:14:23.943410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.907 [2024-07-12 19:14:23.943697] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:17.907 [2024-07-12 19:14:23.943704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.907 passed 00:18:17.907 Test: blockdev nvme admin passthru ...passed 00:18:17.907 Test: blockdev copy ...passed 00:18:17.907 00:18:17.907 Run Summary: Type Total Ran Passed Failed Inactive 00:18:17.907 suites 1 1 n/a 0 0 00:18:17.907 tests 23 23 23 0 0 00:18:17.907 asserts 152 152 152 0 n/a 00:18:17.907 00:18:17.907 Elapsed time = 1.136 seconds 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:18.168 rmmod nvme_tcp 00:18:18.168 rmmod nvme_fabrics 00:18:18.168 rmmod nvme_keyring 00:18:18.168 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1419374 ']' 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1419374 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1419374 ']' 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1419374 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1419374 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1419374' 00:18:18.169 killing process with pid 1419374 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1419374 00:18:18.169 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1419374 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.430 19:14:24 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.343 19:14:26 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.604 00:18:20.604 real 0m11.687s 00:18:20.604 user 0m13.090s 00:18:20.604 sys 0m5.776s 00:18:20.604 19:14:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:20.604 19:14:26 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:20.604 ************************************ 00:18:20.604 END TEST nvmf_bdevio 00:18:20.604 ************************************ 00:18:20.604 19:14:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:20.604 19:14:26 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:20.604 19:14:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:20.604 19:14:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:20.604 19:14:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:20.604 ************************************ 00:18:20.604 START TEST nvmf_auth_target 00:18:20.604 ************************************ 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:20.604 * Looking for test storage... 
00:18:20.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.604 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.605 19:14:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.747 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.748 19:14:33 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:18:28.748 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:18:28.748 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: 
cvl_0_0' 00:18:28.748 Found net devices under 0000:4b:00.0: cvl_0_0 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:18:28.748 Found net devices under 0000:4b:00.1: cvl_0_1 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:28.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:18:28.748 00:18:28.748 --- 10.0.0.2 ping statistics --- 00:18:28.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.748 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:18:28.748 00:18:28.748 --- 10.0.0.1 ping statistics --- 00:18:28.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.748 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:28.748 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1423886 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1423886 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1423886 ']' 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
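Condensed, the nvmf_tcp_init sequence traced above comes down to the sketch below. It is only a recap of what this run did: the interface names cvl_0_0/cvl_0_1, the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are the values this particular job happened to use, not fixed requirements.

# Rough standalone equivalent of the nvmf_tcp_init steps above (assumes the two
# E810 ports already show up as cvl_0_0 and cvl_0_1; adjust names to your setup).
TARGET_IF=cvl_0_0        # port served by the SPDK target, moved into a netns
INITIATOR_IF=cvl_0_1     # port left in the default netns for the initiator
NS=cvl_0_0_ns_spdk

ip -4 addr flush "$TARGET_IF"
ip -4 addr flush "$INITIATOR_IF"
ip netns add "$NS"
ip link set "$TARGET_IF" netns "$NS"                            # target port enters the namespace
ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"                     # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"    # target side
ip link set "$INITIATOR_IF" up
ip netns exec "$NS" ip link set "$TARGET_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target reachability check
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator reachability check

With the namespace in place, NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD ("ip netns exec cvl_0_0_ns_spdk"), which is why the nvmf_tgt launched below runs inside the namespace that owns 10.0.0.2 while the host-side tools on /var/tmp/host.sock reach it from 10.0.0.1.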
00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:28.749 19:14:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1424233 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=f0f6b61e2ee227600da2cb79e25e5edaf112954bb70b6dc1 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.pVo 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key f0f6b61e2ee227600da2cb79e25e5edaf112954bb70b6dc1 0 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 f0f6b61e2ee227600da2cb79e25e5edaf112954bb70b6dc1 0 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=f0f6b61e2ee227600da2cb79e25e5edaf112954bb70b6dc1 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:28.749 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.pVo 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.pVo 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.pVo 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a704c8430a93384e9820d66e6b18d38696008eef5c160fd3d7f9eadbb94c2ad6 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.JnI 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a704c8430a93384e9820d66e6b18d38696008eef5c160fd3d7f9eadbb94c2ad6 3 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a704c8430a93384e9820d66e6b18d38696008eef5c160fd3d7f9eadbb94c2ad6 3 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a704c8430a93384e9820d66e6b18d38696008eef5c160fd3d7f9eadbb94c2ad6 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.JnI 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.JnI 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.JnI 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=af0969125b3a7725abcce7d5561ce21e 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VOk 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key af0969125b3a7725abcce7d5561ce21e 1 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 af0969125b3a7725abcce7d5561ce21e 1 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=af0969125b3a7725abcce7d5561ce21e 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:29.010 19:14:34 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VOk 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VOk 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.VOk 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0efc3193645e09426f0de01d403feeadc614b531b5006cc0 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.9qZ 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0efc3193645e09426f0de01d403feeadc614b531b5006cc0 2 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0efc3193645e09426f0de01d403feeadc614b531b5006cc0 2 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0efc3193645e09426f0de01d403feeadc614b531b5006cc0 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.9qZ 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.9qZ 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.9qZ 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=4f306ce872d99d774509a03c6b1a2576fc562717265006f0 00:18:29.010 
19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ekk 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 4f306ce872d99d774509a03c6b1a2576fc562717265006f0 2 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 4f306ce872d99d774509a03c6b1a2576fc562717265006f0 2 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=4f306ce872d99d774509a03c6b1a2576fc562717265006f0 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:29.010 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ekk 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ekk 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.ekk 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=17bb150a5a7713887c8ed7c73715fcd2 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.C2p 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 17bb150a5a7713887c8ed7c73715fcd2 1 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 17bb150a5a7713887c8ed7c73715fcd2 1 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=17bb150a5a7713887c8ed7c73715fcd2 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.C2p 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.C2p 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.C2p 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0795da53a04c1be7348c06a4c9a9790cccabc71458768440b9753ac68148d247 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Odv 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0795da53a04c1be7348c06a4c9a9790cccabc71458768440b9753ac68148d247 3 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0795da53a04c1be7348c06a4c9a9790cccabc71458768440b9753ac68148d247 3 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0795da53a04c1be7348c06a4c9a9790cccabc71458768440b9753ac68148d247 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Odv 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Odv 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Odv 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1423886 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1423886 ']' 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:29.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
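The four keys and three controller keys generated above come from gen_dhchap_key, which the trace shows reading the requested number of bytes from /dev/urandom via xxd and handing the hex string to a small inline python helper whose body is not echoed in this log. A rough approximation of that helper is sketched below, assuming the usual DHHC-1 secret representation (base64 over the secret bytes with a CRC-32 appended, between "DHHC-1:<digest id>:" and a trailing colon); gen_dhchap_key_sketch and the example output path are illustrative names, not the SPDK code itself.

# Approximation of gen_dhchap_key <digest> <len> as seen in the trace above.
gen_dhchap_key_sketch() {
    local digest=$1 len=$2   # digest: null|sha256|sha384|sha512, len: hex chars (32/48/64)
    local -A ids=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # same entropy source as the log
    python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")          # CRC-32 appended to the secret
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
' "$key" "${ids[$digest]}"
}

# Example usage, mirroring keys[0] above (a null-digest 48-character secret).
gen_dhchap_key_sketch null 48 > /tmp/spdk.key-null.example
chmod 0600 /tmp/spdk.key-null.example

The resulting /tmp/spdk.key-* files are what keyring_file_add_key registers on both the target RPC socket and /var/tmp/host.sock in the steps that follow, and the same DHHC-1:NN:...: strings reappear verbatim later as the --dhchap-secret/--dhchap-ctrl-secret arguments when nvme connect is exercised.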
00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.271 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1424233 /var/tmp/host.sock 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1424233 ']' 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:29.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.pVo 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.pVo 00:18:29.531 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.pVo 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.JnI ]] 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JnI 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JnI 00:18:29.791 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.JnI 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.VOk 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.VOk 00:18:30.051 19:14:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.VOk 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.9qZ ]] 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9qZ 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9qZ 00:18:30.052 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.9qZ 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ekk 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ekk 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ekk 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.C2p ]] 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C2p 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.311 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.C2p 00:18:30.312 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.C2p 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Odv 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Odv 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Odv 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:30.576 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:30.864 19:14:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:31.158 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.158 { 00:18:31.158 "cntlid": 1, 00:18:31.158 "qid": 0, 00:18:31.158 "state": "enabled", 00:18:31.158 "thread": "nvmf_tgt_poll_group_000", 00:18:31.158 "listen_address": { 00:18:31.158 "trtype": "TCP", 00:18:31.158 "adrfam": "IPv4", 00:18:31.158 "traddr": "10.0.0.2", 00:18:31.158 "trsvcid": "4420" 00:18:31.158 }, 00:18:31.158 "peer_address": { 00:18:31.158 "trtype": "TCP", 00:18:31.158 "adrfam": "IPv4", 00:18:31.158 "traddr": "10.0.0.1", 00:18:31.158 "trsvcid": "42748" 00:18:31.158 }, 00:18:31.158 "auth": { 00:18:31.158 "state": "completed", 00:18:31.158 "digest": "sha256", 00:18:31.158 "dhgroup": "null" 00:18:31.158 } 00:18:31.158 } 00:18:31.158 ]' 00:18:31.158 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.418 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.679 19:14:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.250 19:14:38 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.250 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.510 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.770 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.770 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.031 19:14:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.031 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.031 { 00:18:33.031 "cntlid": 3, 00:18:33.031 "qid": 0, 00:18:33.031 
"state": "enabled", 00:18:33.031 "thread": "nvmf_tgt_poll_group_000", 00:18:33.031 "listen_address": { 00:18:33.031 "trtype": "TCP", 00:18:33.031 "adrfam": "IPv4", 00:18:33.031 "traddr": "10.0.0.2", 00:18:33.031 "trsvcid": "4420" 00:18:33.031 }, 00:18:33.031 "peer_address": { 00:18:33.031 "trtype": "TCP", 00:18:33.031 "adrfam": "IPv4", 00:18:33.031 "traddr": "10.0.0.1", 00:18:33.031 "trsvcid": "41966" 00:18:33.031 }, 00:18:33.031 "auth": { 00:18:33.031 "state": "completed", 00:18:33.031 "digest": "sha256", 00:18:33.031 "dhgroup": "null" 00:18:33.031 } 00:18:33.031 } 00:18:33.031 ]' 00:18:33.031 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.031 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.031 19:14:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.031 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:33.031 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.031 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.031 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.031 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.290 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:18:33.860 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:33.861 19:14:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:34.120 19:14:40 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.120 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.379 00:18:34.379 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.379 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.379 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.639 { 00:18:34.639 "cntlid": 5, 00:18:34.639 "qid": 0, 00:18:34.639 "state": "enabled", 00:18:34.639 "thread": "nvmf_tgt_poll_group_000", 00:18:34.639 "listen_address": { 00:18:34.639 "trtype": "TCP", 00:18:34.639 "adrfam": "IPv4", 00:18:34.639 "traddr": "10.0.0.2", 00:18:34.639 "trsvcid": "4420" 00:18:34.639 }, 00:18:34.639 "peer_address": { 00:18:34.639 "trtype": "TCP", 00:18:34.639 "adrfam": "IPv4", 00:18:34.639 "traddr": "10.0.0.1", 00:18:34.639 "trsvcid": "41994" 00:18:34.639 }, 00:18:34.639 "auth": { 00:18:34.639 "state": "completed", 00:18:34.639 "digest": "sha256", 00:18:34.639 "dhgroup": "null" 00:18:34.639 } 00:18:34.639 } 00:18:34.639 ]' 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.639 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.640 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.640 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:34.640 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:34.640 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.640 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.640 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.899 19:14:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.840 19:14:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.101 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.101 { 00:18:36.101 "cntlid": 7, 00:18:36.101 "qid": 0, 00:18:36.101 "state": "enabled", 00:18:36.101 "thread": "nvmf_tgt_poll_group_000", 00:18:36.101 "listen_address": { 00:18:36.101 "trtype": "TCP", 00:18:36.101 "adrfam": "IPv4", 00:18:36.101 "traddr": "10.0.0.2", 00:18:36.101 "trsvcid": "4420" 00:18:36.101 }, 00:18:36.101 "peer_address": { 00:18:36.101 "trtype": "TCP", 00:18:36.101 "adrfam": "IPv4", 00:18:36.101 "traddr": "10.0.0.1", 00:18:36.101 "trsvcid": "42024" 00:18:36.101 }, 00:18:36.101 "auth": { 00:18:36.101 "state": "completed", 00:18:36.101 "digest": "sha256", 00:18:36.101 "dhgroup": "null" 00:18:36.101 } 00:18:36.101 } 00:18:36.101 ]' 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.101 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.361 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:36.361 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.361 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.361 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.361 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.361 19:14:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.305 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:37.565 00:18:37.565 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.565 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.565 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.827 { 00:18:37.827 "cntlid": 9, 00:18:37.827 "qid": 0, 00:18:37.827 "state": "enabled", 00:18:37.827 "thread": "nvmf_tgt_poll_group_000", 00:18:37.827 "listen_address": { 00:18:37.827 "trtype": "TCP", 00:18:37.827 "adrfam": "IPv4", 00:18:37.827 "traddr": "10.0.0.2", 00:18:37.827 "trsvcid": "4420" 00:18:37.827 }, 00:18:37.827 "peer_address": { 00:18:37.827 "trtype": "TCP", 00:18:37.827 "adrfam": "IPv4", 00:18:37.827 "traddr": "10.0.0.1", 00:18:37.827 "trsvcid": "42052" 00:18:37.827 }, 00:18:37.827 "auth": { 00:18:37.827 "state": "completed", 00:18:37.827 "digest": "sha256", 00:18:37.827 "dhgroup": "ffdhe2048" 00:18:37.827 } 00:18:37.827 } 00:18:37.827 ]' 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.827 19:14:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.088 19:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:39.028 19:14:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.028 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:39.288 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.288 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.288 { 00:18:39.288 "cntlid": 11, 00:18:39.288 "qid": 0, 00:18:39.288 "state": "enabled", 00:18:39.288 "thread": "nvmf_tgt_poll_group_000", 00:18:39.288 "listen_address": { 00:18:39.288 "trtype": "TCP", 00:18:39.288 "adrfam": "IPv4", 00:18:39.288 "traddr": "10.0.0.2", 00:18:39.288 "trsvcid": "4420" 00:18:39.288 }, 00:18:39.288 "peer_address": { 00:18:39.288 "trtype": "TCP", 00:18:39.288 "adrfam": "IPv4", 00:18:39.288 "traddr": "10.0.0.1", 00:18:39.288 "trsvcid": "42084" 00:18:39.288 }, 00:18:39.288 "auth": { 00:18:39.288 "state": "completed", 00:18:39.288 "digest": "sha256", 00:18:39.288 "dhgroup": "ffdhe2048" 00:18:39.288 } 00:18:39.288 } 00:18:39.288 ]' 00:18:39.288 
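
The pass above is one round of the test's connect_authenticate helper (the target/auth.sh@34-@56 markers). With the hostrpc/rpc_cmd wrappers expanded the same way the trace expands them, a single round condenses to roughly the sketch below; the socket path, addresses and NQNs are the ones in this log, while key1/ckey1 and the DHHC-1 secrets stand for values loaded earlier in the run, so treat this as an illustrative sketch rather than the script itself:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Host side: limit DH-HMAC-CHAP negotiation to one digest/dhgroup combination.
  $rpc -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

  # Target side (rpc_cmd in the trace, default RPC socket): allow the host with a bidirectional key pair.
  $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1

  # Host side: attach a controller, then confirm the qpair reports auth state "completed".
  $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  $rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'
  $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # key/ckey below stand for the DHHC-1:xx:...: strings shown in the nvme connect lines of the trace.
  # Repeat the handshake with the kernel initiator, then remove the host before the next combination.
  nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
  nvme disconnect -n "$subnqn"
  $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
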
19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.548 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.808 19:14:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.378 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.378 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.638 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:40.897 00:18:40.897 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.897 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.897 19:14:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:41.157 { 00:18:41.157 "cntlid": 13, 00:18:41.157 "qid": 0, 00:18:41.157 "state": "enabled", 00:18:41.157 "thread": "nvmf_tgt_poll_group_000", 00:18:41.157 "listen_address": { 00:18:41.157 "trtype": "TCP", 00:18:41.157 "adrfam": "IPv4", 00:18:41.157 "traddr": "10.0.0.2", 00:18:41.157 "trsvcid": "4420" 00:18:41.157 }, 00:18:41.157 "peer_address": { 00:18:41.157 "trtype": "TCP", 00:18:41.157 "adrfam": "IPv4", 00:18:41.157 "traddr": "10.0.0.1", 00:18:41.157 "trsvcid": "42098" 00:18:41.157 }, 00:18:41.157 "auth": { 00:18:41.157 "state": "completed", 00:18:41.157 "digest": "sha256", 00:18:41.157 "dhgroup": "ffdhe2048" 00:18:41.157 } 00:18:41.157 } 00:18:41.157 ]' 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:41.157 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.416 19:14:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:18:41.984 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.984 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:41.984 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:41.984 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.243 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.244 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:42.503 00:18:42.503 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.503 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.503 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.763 { 00:18:42.763 "cntlid": 15, 00:18:42.763 "qid": 0, 00:18:42.763 "state": "enabled", 00:18:42.763 "thread": "nvmf_tgt_poll_group_000", 00:18:42.763 "listen_address": { 00:18:42.763 "trtype": "TCP", 00:18:42.763 "adrfam": "IPv4", 00:18:42.763 "traddr": "10.0.0.2", 00:18:42.763 "trsvcid": "4420" 00:18:42.763 }, 00:18:42.763 "peer_address": { 00:18:42.763 "trtype": "TCP", 00:18:42.763 "adrfam": "IPv4", 00:18:42.763 "traddr": "10.0.0.1", 00:18:42.763 "trsvcid": "50598" 00:18:42.763 }, 00:18:42.763 "auth": { 00:18:42.763 "state": "completed", 00:18:42.763 "digest": "sha256", 00:18:42.763 "dhgroup": "ffdhe2048" 00:18:42.763 } 00:18:42.763 } 00:18:42.763 ]' 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.763 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.023 19:14:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:18:43.593 19:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:43.854 19:14:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.115 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:44.115 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.376 { 00:18:44.376 "cntlid": 17, 00:18:44.376 "qid": 0, 00:18:44.376 "state": "enabled", 00:18:44.376 "thread": "nvmf_tgt_poll_group_000", 00:18:44.376 "listen_address": { 00:18:44.376 "trtype": "TCP", 00:18:44.376 "adrfam": "IPv4", 00:18:44.376 "traddr": 
"10.0.0.2", 00:18:44.376 "trsvcid": "4420" 00:18:44.376 }, 00:18:44.376 "peer_address": { 00:18:44.376 "trtype": "TCP", 00:18:44.376 "adrfam": "IPv4", 00:18:44.376 "traddr": "10.0.0.1", 00:18:44.376 "trsvcid": "50620" 00:18:44.376 }, 00:18:44.376 "auth": { 00:18:44.376 "state": "completed", 00:18:44.376 "digest": "sha256", 00:18:44.376 "dhgroup": "ffdhe3072" 00:18:44.376 } 00:18:44.376 } 00:18:44.376 ]' 00:18:44.376 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.637 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.638 19:14:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.580 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:45.840 00:18:45.840 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.840 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.840 19:14:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.101 { 00:18:46.101 "cntlid": 19, 00:18:46.101 "qid": 0, 00:18:46.101 "state": "enabled", 00:18:46.101 "thread": "nvmf_tgt_poll_group_000", 00:18:46.101 "listen_address": { 00:18:46.101 "trtype": "TCP", 00:18:46.101 "adrfam": "IPv4", 00:18:46.101 "traddr": "10.0.0.2", 00:18:46.101 "trsvcid": "4420" 00:18:46.101 }, 00:18:46.101 "peer_address": { 00:18:46.101 "trtype": "TCP", 00:18:46.101 "adrfam": "IPv4", 00:18:46.101 "traddr": "10.0.0.1", 00:18:46.101 "trsvcid": "50642" 00:18:46.101 }, 00:18:46.101 "auth": { 00:18:46.101 "state": "completed", 00:18:46.101 "digest": "sha256", 00:18:46.101 "dhgroup": "ffdhe3072" 00:18:46.101 } 00:18:46.101 } 00:18:46.101 ]' 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.101 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.362 19:14:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.304 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.305 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:47.565 00:18:47.565 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.565 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.565 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.825 { 00:18:47.825 "cntlid": 21, 00:18:47.825 "qid": 0, 00:18:47.825 "state": "enabled", 00:18:47.825 "thread": "nvmf_tgt_poll_group_000", 00:18:47.825 "listen_address": { 00:18:47.825 "trtype": "TCP", 00:18:47.825 "adrfam": "IPv4", 00:18:47.825 "traddr": "10.0.0.2", 00:18:47.825 "trsvcid": "4420" 00:18:47.825 }, 00:18:47.825 "peer_address": { 00:18:47.825 "trtype": "TCP", 00:18:47.825 "adrfam": "IPv4", 00:18:47.825 "traddr": "10.0.0.1", 00:18:47.825 "trsvcid": "50668" 00:18:47.825 }, 00:18:47.825 "auth": { 00:18:47.825 "state": "completed", 00:18:47.825 "digest": "sha256", 00:18:47.825 "dhgroup": "ffdhe3072" 00:18:47.825 } 00:18:47.825 } 00:18:47.825 ]' 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.825 19:14:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.085 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
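
Stepping back, the iteration structure driving these repeated rounds is visible in the for-loop markers (target/auth.sh@92-@96): every dhgroup is exercised with every key index for the sha256 digest. A minimal sketch of that outer loop, listing only the dhgroups that appear in this part of the log and reusing the connect_authenticate round sketched earlier, would be:

  # hostrpc as expanded throughout the trace: the host-side RPC server on /var/tmp/host.sock.
  hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }

  digest=sha256
  for dhgroup in null ffdhe2048 ffdhe3072 ffdhe4096; do
    for keyid in 0 1 2 3; do
      # Reconfigure the host for this combination, then run one authenticate/connect round.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
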
00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.655 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.916 19:14:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.177 00:18:49.177 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.177 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.177 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.438 { 00:18:49.438 "cntlid": 23, 00:18:49.438 "qid": 0, 00:18:49.438 "state": "enabled", 00:18:49.438 "thread": "nvmf_tgt_poll_group_000", 00:18:49.438 "listen_address": { 00:18:49.438 "trtype": "TCP", 00:18:49.438 "adrfam": "IPv4", 00:18:49.438 "traddr": "10.0.0.2", 00:18:49.438 "trsvcid": "4420" 00:18:49.438 }, 00:18:49.438 "peer_address": { 00:18:49.438 "trtype": "TCP", 00:18:49.438 "adrfam": "IPv4", 00:18:49.438 "traddr": "10.0.0.1", 00:18:49.438 "trsvcid": "50698" 00:18:49.438 }, 00:18:49.438 "auth": { 00:18:49.438 "state": "completed", 00:18:49.438 "digest": "sha256", 00:18:49.438 "dhgroup": "ffdhe3072" 00:18:49.438 } 00:18:49.438 } 00:18:49.438 ]' 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.438 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.698 19:14:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.273 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.273 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.534 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.794 00:18:50.794 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.794 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.795 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:51.057 { 00:18:51.057 "cntlid": 25, 00:18:51.057 "qid": 0, 00:18:51.057 "state": "enabled", 00:18:51.057 "thread": "nvmf_tgt_poll_group_000", 00:18:51.057 "listen_address": { 00:18:51.057 "trtype": "TCP", 00:18:51.057 "adrfam": "IPv4", 00:18:51.057 "traddr": "10.0.0.2", 00:18:51.057 "trsvcid": "4420" 00:18:51.057 }, 00:18:51.057 "peer_address": { 00:18:51.057 "trtype": "TCP", 00:18:51.057 "adrfam": "IPv4", 00:18:51.057 "traddr": "10.0.0.1", 00:18:51.057 "trsvcid": "50734" 00:18:51.057 }, 00:18:51.057 "auth": { 00:18:51.057 "state": "completed", 00:18:51.057 "digest": "sha256", 00:18:51.057 "dhgroup": "ffdhe4096" 00:18:51.057 } 00:18:51.057 } 00:18:51.057 ]' 00:18:51.057 19:14:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:51.057 19:14:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:51.057 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:51.057 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:51.057 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:51.057 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:51.057 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:51.057 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.318 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:51.955 19:14:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.216 19:14:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.216 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.477 00:18:52.477 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.477 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.477 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.738 { 00:18:52.738 "cntlid": 27, 00:18:52.738 "qid": 0, 00:18:52.738 "state": "enabled", 00:18:52.738 "thread": "nvmf_tgt_poll_group_000", 00:18:52.738 "listen_address": { 00:18:52.738 "trtype": "TCP", 00:18:52.738 "adrfam": "IPv4", 00:18:52.738 "traddr": "10.0.0.2", 00:18:52.738 "trsvcid": "4420" 00:18:52.738 }, 00:18:52.738 "peer_address": { 00:18:52.738 "trtype": "TCP", 00:18:52.738 "adrfam": "IPv4", 00:18:52.738 "traddr": "10.0.0.1", 00:18:52.738 "trsvcid": "44050" 00:18:52.738 }, 00:18:52.738 "auth": { 00:18:52.738 "state": "completed", 00:18:52.738 "digest": "sha256", 00:18:52.738 "dhgroup": "ffdhe4096" 00:18:52.738 } 00:18:52.738 } 00:18:52.738 ]' 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.738 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.999 19:14:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:18:53.572 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.572 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.572 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:53.833 19:14:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.093 00:18:54.094 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.094 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.094 19:15:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.355 { 00:18:54.355 "cntlid": 29, 00:18:54.355 "qid": 0, 00:18:54.355 "state": "enabled", 00:18:54.355 "thread": "nvmf_tgt_poll_group_000", 00:18:54.355 "listen_address": { 00:18:54.355 "trtype": "TCP", 00:18:54.355 "adrfam": "IPv4", 00:18:54.355 "traddr": "10.0.0.2", 00:18:54.355 "trsvcid": "4420" 00:18:54.355 }, 00:18:54.355 "peer_address": { 00:18:54.355 "trtype": "TCP", 00:18:54.355 "adrfam": "IPv4", 00:18:54.355 "traddr": "10.0.0.1", 00:18:54.355 "trsvcid": "44070" 00:18:54.355 }, 00:18:54.355 "auth": { 00:18:54.355 "state": "completed", 00:18:54.355 "digest": "sha256", 00:18:54.355 "dhgroup": "ffdhe4096" 00:18:54.355 } 00:18:54.355 } 00:18:54.355 ]' 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.355 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:54.616 19:15:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.560 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.560 19:15:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.560 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:55.821 00:18:55.821 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.821 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.821 19:15:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.084 { 00:18:56.084 "cntlid": 31, 00:18:56.084 "qid": 0, 00:18:56.084 "state": "enabled", 00:18:56.084 "thread": "nvmf_tgt_poll_group_000", 00:18:56.084 "listen_address": { 00:18:56.084 "trtype": "TCP", 00:18:56.084 "adrfam": "IPv4", 00:18:56.084 "traddr": "10.0.0.2", 00:18:56.084 "trsvcid": "4420" 00:18:56.084 }, 
00:18:56.084 "peer_address": { 00:18:56.084 "trtype": "TCP", 00:18:56.084 "adrfam": "IPv4", 00:18:56.084 "traddr": "10.0.0.1", 00:18:56.084 "trsvcid": "44094" 00:18:56.084 }, 00:18:56.084 "auth": { 00:18:56.084 "state": "completed", 00:18:56.084 "digest": "sha256", 00:18:56.084 "dhgroup": "ffdhe4096" 00:18:56.084 } 00:18:56.084 } 00:18:56.084 ]' 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.084 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.345 19:15:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.288 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.288 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.548 00:18:57.548 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.548 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.548 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:57.808 { 00:18:57.808 "cntlid": 33, 00:18:57.808 "qid": 0, 00:18:57.808 "state": "enabled", 00:18:57.808 "thread": "nvmf_tgt_poll_group_000", 00:18:57.808 "listen_address": { 00:18:57.808 "trtype": "TCP", 00:18:57.808 "adrfam": "IPv4", 00:18:57.808 "traddr": "10.0.0.2", 00:18:57.808 "trsvcid": "4420" 00:18:57.808 }, 00:18:57.808 "peer_address": { 00:18:57.808 "trtype": "TCP", 00:18:57.808 "adrfam": "IPv4", 00:18:57.808 "traddr": "10.0.0.1", 00:18:57.808 "trsvcid": "44120" 00:18:57.808 }, 00:18:57.808 "auth": { 00:18:57.808 "state": "completed", 00:18:57.808 "digest": "sha256", 00:18:57.808 "dhgroup": "ffdhe6144" 00:18:57.808 } 00:18:57.808 } 00:18:57.808 ]' 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.808 19:15:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.808 19:15:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.068 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.010 19:15:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.010 19:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.010 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.010 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.271 00:18:59.271 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.271 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.271 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.532 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.532 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.532 19:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.532 19:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.532 19:15:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.532 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.532 { 00:18:59.532 "cntlid": 35, 00:18:59.532 "qid": 0, 00:18:59.532 "state": "enabled", 00:18:59.532 "thread": "nvmf_tgt_poll_group_000", 00:18:59.532 "listen_address": { 00:18:59.532 "trtype": "TCP", 00:18:59.533 "adrfam": "IPv4", 00:18:59.533 "traddr": "10.0.0.2", 00:18:59.533 "trsvcid": "4420" 00:18:59.533 }, 00:18:59.533 "peer_address": { 00:18:59.533 "trtype": "TCP", 00:18:59.533 "adrfam": "IPv4", 00:18:59.533 "traddr": "10.0.0.1", 00:18:59.533 "trsvcid": "44154" 00:18:59.533 }, 00:18:59.533 "auth": { 00:18:59.533 "state": "completed", 00:18:59.533 "digest": "sha256", 00:18:59.533 "dhgroup": "ffdhe6144" 00:18:59.533 } 00:18:59.533 } 00:18:59.533 ]' 00:18:59.533 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.533 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:59.533 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:59.533 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:59.533 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:59.794 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.794 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.794 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.794 19:15:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.735 19:15:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:00.995 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.256 { 00:19:01.256 "cntlid": 37, 00:19:01.256 "qid": 0, 00:19:01.256 "state": "enabled", 00:19:01.256 "thread": "nvmf_tgt_poll_group_000", 00:19:01.256 "listen_address": { 00:19:01.256 "trtype": "TCP", 00:19:01.256 "adrfam": "IPv4", 00:19:01.256 "traddr": "10.0.0.2", 00:19:01.256 "trsvcid": "4420" 00:19:01.256 }, 00:19:01.256 "peer_address": { 00:19:01.256 "trtype": "TCP", 00:19:01.256 "adrfam": "IPv4", 00:19:01.256 "traddr": "10.0.0.1", 00:19:01.256 "trsvcid": "44172" 00:19:01.256 }, 00:19:01.256 "auth": { 00:19:01.256 "state": "completed", 00:19:01.256 "digest": "sha256", 00:19:01.256 "dhgroup": "ffdhe6144" 00:19:01.256 } 00:19:01.256 } 00:19:01.256 ]' 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:01.256 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.517 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:01.517 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.517 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.517 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.517 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.517 19:15:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:02.458 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:03.028 00:19:03.028 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.028 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.028 19:15:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.028 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.028 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.028 19:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.028 19:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.028 19:15:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.028 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.028 { 00:19:03.028 "cntlid": 39, 00:19:03.028 "qid": 0, 00:19:03.028 "state": "enabled", 00:19:03.028 "thread": "nvmf_tgt_poll_group_000", 00:19:03.028 "listen_address": { 00:19:03.028 "trtype": "TCP", 00:19:03.028 "adrfam": "IPv4", 00:19:03.028 "traddr": "10.0.0.2", 00:19:03.028 "trsvcid": "4420" 00:19:03.028 }, 00:19:03.028 "peer_address": { 00:19:03.028 "trtype": "TCP", 00:19:03.028 "adrfam": "IPv4", 00:19:03.028 "traddr": "10.0.0.1", 00:19:03.028 "trsvcid": "49512" 00:19:03.028 }, 00:19:03.028 "auth": { 00:19:03.029 "state": "completed", 00:19:03.029 "digest": "sha256", 00:19:03.029 "dhgroup": "ffdhe6144" 00:19:03.029 } 00:19:03.029 } 00:19:03.029 ]' 00:19:03.029 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.029 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:03.029 19:15:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.029 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:03.289 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.289 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.289 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.289 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.289 19:15:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:04.230 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.231 19:15:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.231 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:04.803 00:19:04.803 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.803 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.803 19:15:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:05.068 { 00:19:05.068 "cntlid": 41, 00:19:05.068 "qid": 0, 00:19:05.068 "state": "enabled", 00:19:05.068 "thread": "nvmf_tgt_poll_group_000", 00:19:05.068 "listen_address": { 00:19:05.068 "trtype": "TCP", 00:19:05.068 "adrfam": "IPv4", 00:19:05.068 "traddr": "10.0.0.2", 00:19:05.068 "trsvcid": "4420" 00:19:05.068 }, 00:19:05.068 "peer_address": { 00:19:05.068 "trtype": "TCP", 00:19:05.068 "adrfam": "IPv4", 00:19:05.068 "traddr": "10.0.0.1", 00:19:05.068 "trsvcid": "49548" 00:19:05.068 }, 00:19:05.068 "auth": { 00:19:05.068 "state": "completed", 00:19:05.068 "digest": "sha256", 00:19:05.068 "dhgroup": "ffdhe8192" 00:19:05.068 } 00:19:05.068 } 00:19:05.068 ]' 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:05.068 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.328 19:15:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret 
DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:06.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.271 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.842 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.842 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.842 { 00:19:06.842 "cntlid": 43, 00:19:06.842 "qid": 0, 00:19:06.842 "state": "enabled", 00:19:06.842 "thread": "nvmf_tgt_poll_group_000", 00:19:06.842 "listen_address": { 00:19:06.842 "trtype": "TCP", 00:19:06.842 "adrfam": "IPv4", 00:19:06.842 "traddr": "10.0.0.2", 00:19:06.842 "trsvcid": "4420" 00:19:06.842 }, 00:19:06.843 "peer_address": { 00:19:06.843 "trtype": "TCP", 00:19:06.843 "adrfam": "IPv4", 00:19:06.843 "traddr": "10.0.0.1", 00:19:06.843 "trsvcid": "49572" 00:19:06.843 }, 00:19:06.843 "auth": { 00:19:06.843 "state": "completed", 00:19:06.843 "digest": "sha256", 00:19:06.843 "dhgroup": "ffdhe8192" 00:19:06.843 } 00:19:06.843 } 00:19:06.843 ]' 00:19:07.104 19:15:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.104 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:08.046 19:15:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:08.046 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:08.046 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.046 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.046 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:08.046 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.047 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.617 00:19:08.617 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.617 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.618 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.878 { 00:19:08.878 "cntlid": 45, 00:19:08.878 "qid": 0, 00:19:08.878 "state": "enabled", 00:19:08.878 "thread": "nvmf_tgt_poll_group_000", 00:19:08.878 "listen_address": { 00:19:08.878 "trtype": "TCP", 00:19:08.878 "adrfam": "IPv4", 00:19:08.878 "traddr": "10.0.0.2", 00:19:08.878 "trsvcid": "4420" 
00:19:08.878 }, 00:19:08.878 "peer_address": { 00:19:08.878 "trtype": "TCP", 00:19:08.878 "adrfam": "IPv4", 00:19:08.878 "traddr": "10.0.0.1", 00:19:08.878 "trsvcid": "49606" 00:19:08.878 }, 00:19:08.878 "auth": { 00:19:08.878 "state": "completed", 00:19:08.878 "digest": "sha256", 00:19:08.878 "dhgroup": "ffdhe8192" 00:19:08.878 } 00:19:08.878 } 00:19:08.878 ]' 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.878 19:15:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.139 19:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:10.082 19:15:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.082 19:15:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.082 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.655 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:10.656 { 00:19:10.656 "cntlid": 47, 00:19:10.656 "qid": 0, 00:19:10.656 "state": "enabled", 00:19:10.656 "thread": "nvmf_tgt_poll_group_000", 00:19:10.656 "listen_address": { 00:19:10.656 "trtype": "TCP", 00:19:10.656 "adrfam": "IPv4", 00:19:10.656 "traddr": "10.0.0.2", 00:19:10.656 "trsvcid": "4420" 00:19:10.656 }, 00:19:10.656 "peer_address": { 00:19:10.656 "trtype": "TCP", 00:19:10.656 "adrfam": "IPv4", 00:19:10.656 "traddr": "10.0.0.1", 00:19:10.656 "trsvcid": "49636" 00:19:10.656 }, 00:19:10.656 "auth": { 00:19:10.656 "state": "completed", 00:19:10.656 "digest": "sha256", 00:19:10.656 "dhgroup": "ffdhe8192" 00:19:10.656 } 00:19:10.656 } 00:19:10.656 ]' 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:10.656 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:10.918 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:10.918 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:10.918 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.918 19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.918 
19:15:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.918 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:11.864 19:15:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.126 00:19:12.126 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.126 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.126 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.387 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.387 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.387 19:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.387 19:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.387 19:15:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.387 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.387 { 00:19:12.387 "cntlid": 49, 00:19:12.387 "qid": 0, 00:19:12.387 "state": "enabled", 00:19:12.387 "thread": "nvmf_tgt_poll_group_000", 00:19:12.387 "listen_address": { 00:19:12.387 "trtype": "TCP", 00:19:12.387 "adrfam": "IPv4", 00:19:12.387 "traddr": "10.0.0.2", 00:19:12.387 "trsvcid": "4420" 00:19:12.387 }, 00:19:12.387 "peer_address": { 00:19:12.387 "trtype": "TCP", 00:19:12.387 "adrfam": "IPv4", 00:19:12.388 "traddr": "10.0.0.1", 00:19:12.388 "trsvcid": "56214" 00:19:12.388 }, 00:19:12.388 "auth": { 00:19:12.388 "state": "completed", 00:19:12.388 "digest": "sha384", 00:19:12.388 "dhgroup": "null" 00:19:12.388 } 00:19:12.388 } 00:19:12.388 ]' 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.388 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:12.649 19:15:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.594 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.594 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.855 00:19:13.855 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.855 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.855 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.855 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.115 19:15:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.115 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.115 19:15:19 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:14.115 19:15:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.115 { 00:19:14.115 "cntlid": 51, 00:19:14.115 "qid": 0, 00:19:14.115 "state": "enabled", 00:19:14.115 "thread": "nvmf_tgt_poll_group_000", 00:19:14.115 "listen_address": { 00:19:14.115 "trtype": "TCP", 00:19:14.115 "adrfam": "IPv4", 00:19:14.115 "traddr": "10.0.0.2", 00:19:14.115 "trsvcid": "4420" 00:19:14.115 }, 00:19:14.115 "peer_address": { 00:19:14.115 "trtype": "TCP", 00:19:14.115 "adrfam": "IPv4", 00:19:14.115 "traddr": "10.0.0.1", 00:19:14.115 "trsvcid": "56238" 00:19:14.115 }, 00:19:14.115 "auth": { 00:19:14.115 "state": "completed", 00:19:14.115 "digest": "sha384", 00:19:14.115 "dhgroup": "null" 00:19:14.115 } 00:19:14.115 } 00:19:14.115 ]' 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.115 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.376 19:15:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:14.945 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.945 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.945 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:14.945 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.945 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:15.209 19:15:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.209 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.515 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.515 { 00:19:15.515 "cntlid": 53, 00:19:15.515 "qid": 0, 00:19:15.515 "state": "enabled", 00:19:15.515 "thread": "nvmf_tgt_poll_group_000", 00:19:15.515 "listen_address": { 00:19:15.515 "trtype": "TCP", 00:19:15.515 "adrfam": "IPv4", 00:19:15.515 "traddr": "10.0.0.2", 00:19:15.515 "trsvcid": "4420" 00:19:15.515 }, 00:19:15.515 "peer_address": { 00:19:15.515 "trtype": "TCP", 00:19:15.515 "adrfam": "IPv4", 00:19:15.515 "traddr": "10.0.0.1", 00:19:15.515 "trsvcid": "56280" 00:19:15.515 }, 00:19:15.515 "auth": { 00:19:15.515 "state": "completed", 00:19:15.515 "digest": "sha384", 00:19:15.515 "dhgroup": "null" 00:19:15.515 } 00:19:15.515 } 00:19:15.515 ]' 00:19:15.515 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.796 19:15:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.735 19:15:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:16.995 00:19:16.995 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:16.995 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:16.995 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.256 { 00:19:17.256 "cntlid": 55, 00:19:17.256 "qid": 0, 00:19:17.256 "state": "enabled", 00:19:17.256 "thread": "nvmf_tgt_poll_group_000", 00:19:17.256 "listen_address": { 00:19:17.256 "trtype": "TCP", 00:19:17.256 "adrfam": "IPv4", 00:19:17.256 "traddr": "10.0.0.2", 00:19:17.256 "trsvcid": "4420" 00:19:17.256 }, 00:19:17.256 "peer_address": { 00:19:17.256 "trtype": "TCP", 00:19:17.256 "adrfam": "IPv4", 00:19:17.256 "traddr": "10.0.0.1", 00:19:17.256 "trsvcid": "56302" 00:19:17.256 }, 00:19:17.256 "auth": { 00:19:17.256 "state": "completed", 00:19:17.256 "digest": "sha384", 00:19:17.256 "dhgroup": "null" 00:19:17.256 } 00:19:17.256 } 00:19:17.256 ]' 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.256 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.516 19:15:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:18.459 19:15:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.459 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.720 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.720 { 00:19:18.720 "cntlid": 57, 00:19:18.720 "qid": 0, 00:19:18.720 "state": "enabled", 00:19:18.720 "thread": "nvmf_tgt_poll_group_000", 00:19:18.720 "listen_address": { 00:19:18.720 "trtype": "TCP", 00:19:18.720 "adrfam": "IPv4", 00:19:18.720 "traddr": "10.0.0.2", 00:19:18.720 "trsvcid": "4420" 00:19:18.720 }, 00:19:18.720 "peer_address": { 00:19:18.720 "trtype": "TCP", 00:19:18.720 "adrfam": "IPv4", 00:19:18.720 "traddr": "10.0.0.1", 00:19:18.720 "trsvcid": "56328" 00:19:18.720 }, 00:19:18.720 "auth": { 00:19:18.720 "state": "completed", 00:19:18.720 "digest": "sha384", 00:19:18.720 "dhgroup": "ffdhe2048" 00:19:18.720 } 00:19:18.720 } 00:19:18.720 ]' 00:19:18.720 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.981 19:15:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.241 19:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.814 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:19.814 19:15:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.075 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.336 00:19:20.336 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.336 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.336 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.597 { 00:19:20.597 "cntlid": 59, 00:19:20.597 "qid": 0, 00:19:20.597 "state": "enabled", 00:19:20.597 "thread": "nvmf_tgt_poll_group_000", 00:19:20.597 "listen_address": { 00:19:20.597 "trtype": "TCP", 00:19:20.597 "adrfam": "IPv4", 00:19:20.597 "traddr": "10.0.0.2", 00:19:20.597 "trsvcid": "4420" 00:19:20.597 }, 00:19:20.597 "peer_address": { 00:19:20.597 "trtype": "TCP", 00:19:20.597 "adrfam": "IPv4", 00:19:20.597 
"traddr": "10.0.0.1", 00:19:20.597 "trsvcid": "56356" 00:19:20.597 }, 00:19:20.597 "auth": { 00:19:20.597 "state": "completed", 00:19:20.597 "digest": "sha384", 00:19:20.597 "dhgroup": "ffdhe2048" 00:19:20.597 } 00:19:20.597 } 00:19:20.597 ]' 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.597 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.858 19:15:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:21.429 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.697 19:15:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.958 00:19:21.958 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.958 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.958 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.218 { 00:19:22.218 "cntlid": 61, 00:19:22.218 "qid": 0, 00:19:22.218 "state": "enabled", 00:19:22.218 "thread": "nvmf_tgt_poll_group_000", 00:19:22.218 "listen_address": { 00:19:22.218 "trtype": "TCP", 00:19:22.218 "adrfam": "IPv4", 00:19:22.218 "traddr": "10.0.0.2", 00:19:22.218 "trsvcid": "4420" 00:19:22.218 }, 00:19:22.218 "peer_address": { 00:19:22.218 "trtype": "TCP", 00:19:22.218 "adrfam": "IPv4", 00:19:22.218 "traddr": "10.0.0.1", 00:19:22.218 "trsvcid": "34582" 00:19:22.218 }, 00:19:22.218 "auth": { 00:19:22.218 "state": "completed", 00:19:22.218 "digest": "sha384", 00:19:22.218 "dhgroup": "ffdhe2048" 00:19:22.218 } 00:19:22.218 } 00:19:22.218 ]' 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.218 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.219 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.219 19:15:28 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.479 19:15:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.421 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.682 00:19:23.682 19:15:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.682 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.682 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.682 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.944 { 00:19:23.944 "cntlid": 63, 00:19:23.944 "qid": 0, 00:19:23.944 "state": "enabled", 00:19:23.944 "thread": "nvmf_tgt_poll_group_000", 00:19:23.944 "listen_address": { 00:19:23.944 "trtype": "TCP", 00:19:23.944 "adrfam": "IPv4", 00:19:23.944 "traddr": "10.0.0.2", 00:19:23.944 "trsvcid": "4420" 00:19:23.944 }, 00:19:23.944 "peer_address": { 00:19:23.944 "trtype": "TCP", 00:19:23.944 "adrfam": "IPv4", 00:19:23.944 "traddr": "10.0.0.1", 00:19:23.944 "trsvcid": "34614" 00:19:23.944 }, 00:19:23.944 "auth": { 00:19:23.944 "state": "completed", 00:19:23.944 "digest": "sha384", 00:19:23.944 "dhgroup": "ffdhe2048" 00:19:23.944 } 00:19:23.944 } 00:19:23.944 ]' 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.944 19:15:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.205 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.776 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
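[Editor's note] Each round in the trace above boils down to the same DH-HMAC-CHAP sequence. The following is a condensed bash sketch of one such round (sha384 digest, ffdhe2048 DH group, key2 with a controller key, as exercised earlier in the trace); the RPC script path, socket, NQNs, address, and option names are copied from the log, while the standalone layout, the shell variables, and the use of the default RPC socket for the target-side calls are illustrative assumptions rather than the literal contents of target/auth.sh:

# Condensed sketch only -- not the literal test script.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Host side: restrict DH-HMAC-CHAP negotiation to one digest/dhgroup combination.
$RPC -s $HOST_SOCK bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket assumed): allow the host on the subsystem;
# the controller key (ckey2) enables bidirectional authentication.
$RPC nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Host side: attach a controller, which forces an authenticated connect.
$RPC -s $HOST_SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q $HOSTNQN -n $SUBNQN --dhchap-key key2 --dhchap-ctrlr-key ckey2
$RPC -s $HOST_SOCK bdev_nvme_get_controllers   # expect a controller named nvme0

# Verify the negotiated parameters on the target's queue pair.
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.digest'    # expect sha384
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.dhgroup'   # expect ffdhe2048
$RPC nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth.state'     # expect completed

# Repeat the check through the kernel initiator, then clean up.
# KEY2/CKEY2 are placeholders for the DHHC-1:... secrets printed in the trace.
$RPC -s $HOST_SOCK bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret "$KEY2" --dhchap-ctrl-secret "$CKEY2"
nvme disconnect -n $SUBNQN
$RPC nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

As the trace shows via the expansion ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}), the controller key is only passed when a ckey exists for that key index, so some rounds (for example key3 above) authenticate unidirectionally while others negotiate bidirectional DH-HMAC-CHAP.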
00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:24.776 19:15:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.037 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.298 00:19:25.298 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.298 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.298 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.559 { 
00:19:25.559 "cntlid": 65, 00:19:25.559 "qid": 0, 00:19:25.559 "state": "enabled", 00:19:25.559 "thread": "nvmf_tgt_poll_group_000", 00:19:25.559 "listen_address": { 00:19:25.559 "trtype": "TCP", 00:19:25.559 "adrfam": "IPv4", 00:19:25.559 "traddr": "10.0.0.2", 00:19:25.559 "trsvcid": "4420" 00:19:25.559 }, 00:19:25.559 "peer_address": { 00:19:25.559 "trtype": "TCP", 00:19:25.559 "adrfam": "IPv4", 00:19:25.559 "traddr": "10.0.0.1", 00:19:25.559 "trsvcid": "34634" 00:19:25.559 }, 00:19:25.559 "auth": { 00:19:25.559 "state": "completed", 00:19:25.559 "digest": "sha384", 00:19:25.559 "dhgroup": "ffdhe3072" 00:19:25.559 } 00:19:25.559 } 00:19:25.559 ]' 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.559 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.819 19:15:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.762 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.762 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.022 00:19:27.022 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.022 19:15:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.022 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.281 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.281 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.281 19:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.282 { 00:19:27.282 "cntlid": 67, 00:19:27.282 "qid": 0, 00:19:27.282 "state": "enabled", 00:19:27.282 "thread": "nvmf_tgt_poll_group_000", 00:19:27.282 "listen_address": { 00:19:27.282 "trtype": "TCP", 00:19:27.282 "adrfam": "IPv4", 00:19:27.282 "traddr": "10.0.0.2", 00:19:27.282 "trsvcid": "4420" 00:19:27.282 }, 00:19:27.282 "peer_address": { 00:19:27.282 "trtype": "TCP", 00:19:27.282 "adrfam": "IPv4", 00:19:27.282 "traddr": "10.0.0.1", 00:19:27.282 "trsvcid": "34640" 00:19:27.282 }, 00:19:27.282 "auth": { 00:19:27.282 "state": "completed", 00:19:27.282 "digest": "sha384", 00:19:27.282 "dhgroup": "ffdhe3072" 00:19:27.282 } 00:19:27.282 } 00:19:27.282 ]' 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.282 19:15:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.282 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.541 19:15:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:28.109 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.368 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.368 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.628 00:19:28.628 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.628 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.628 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.887 { 00:19:28.887 "cntlid": 69, 00:19:28.887 "qid": 0, 00:19:28.887 "state": "enabled", 00:19:28.887 "thread": "nvmf_tgt_poll_group_000", 00:19:28.887 "listen_address": { 00:19:28.887 "trtype": "TCP", 00:19:28.887 "adrfam": "IPv4", 00:19:28.887 "traddr": "10.0.0.2", 00:19:28.887 "trsvcid": "4420" 00:19:28.887 }, 00:19:28.887 "peer_address": { 00:19:28.887 "trtype": "TCP", 00:19:28.887 "adrfam": "IPv4", 00:19:28.887 "traddr": "10.0.0.1", 00:19:28.887 "trsvcid": "34666" 00:19:28.887 }, 00:19:28.887 "auth": { 00:19:28.887 "state": "completed", 00:19:28.887 "digest": "sha384", 00:19:28.887 "dhgroup": "ffdhe3072" 00:19:28.887 } 00:19:28.887 } 00:19:28.887 ]' 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.887 19:15:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.146 19:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret 
DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.084 19:15:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.084 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.343 00:19:30.343 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.343 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.343 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.603 { 00:19:30.603 "cntlid": 71, 00:19:30.603 "qid": 0, 00:19:30.603 "state": "enabled", 00:19:30.603 "thread": "nvmf_tgt_poll_group_000", 00:19:30.603 "listen_address": { 00:19:30.603 "trtype": "TCP", 00:19:30.603 "adrfam": "IPv4", 00:19:30.603 "traddr": "10.0.0.2", 00:19:30.603 "trsvcid": "4420" 00:19:30.603 }, 00:19:30.603 "peer_address": { 00:19:30.603 "trtype": "TCP", 00:19:30.603 "adrfam": "IPv4", 00:19:30.603 "traddr": "10.0.0.1", 00:19:30.603 "trsvcid": "34684" 00:19:30.603 }, 00:19:30.603 "auth": { 00:19:30.603 "state": "completed", 00:19:30.603 "digest": "sha384", 00:19:30.603 "dhgroup": "ffdhe3072" 00:19:30.603 } 00:19:30.603 } 00:19:30.603 ]' 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.603 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.864 19:15:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:31.433 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.692 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:31.692 19:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.692 19:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.692 19:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.692 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.692 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.693 19:15:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.952 00:19:31.952 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.952 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.952 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.212 { 00:19:32.212 "cntlid": 73, 00:19:32.212 "qid": 0, 00:19:32.212 "state": "enabled", 00:19:32.212 "thread": "nvmf_tgt_poll_group_000", 00:19:32.212 "listen_address": { 00:19:32.212 "trtype": "TCP", 00:19:32.212 "adrfam": "IPv4", 00:19:32.212 "traddr": "10.0.0.2", 00:19:32.212 "trsvcid": "4420" 00:19:32.212 }, 00:19:32.212 "peer_address": { 00:19:32.212 "trtype": "TCP", 00:19:32.212 "adrfam": "IPv4", 00:19:32.212 "traddr": "10.0.0.1", 00:19:32.212 "trsvcid": "48152" 00:19:32.212 }, 00:19:32.212 "auth": { 00:19:32.212 
"state": "completed", 00:19:32.212 "digest": "sha384", 00:19:32.212 "dhgroup": "ffdhe4096" 00:19:32.212 } 00:19:32.212 } 00:19:32.212 ]' 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:32.212 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.472 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.472 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.472 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.472 19:15:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.413 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.673 00:19:33.673 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.673 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.673 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.932 { 00:19:33.932 "cntlid": 75, 00:19:33.932 "qid": 0, 00:19:33.932 "state": "enabled", 00:19:33.932 "thread": "nvmf_tgt_poll_group_000", 00:19:33.932 "listen_address": { 00:19:33.932 "trtype": "TCP", 00:19:33.932 "adrfam": "IPv4", 00:19:33.932 "traddr": "10.0.0.2", 00:19:33.932 "trsvcid": "4420" 00:19:33.932 }, 00:19:33.932 "peer_address": { 00:19:33.932 "trtype": "TCP", 00:19:33.932 "adrfam": "IPv4", 00:19:33.932 "traddr": "10.0.0.1", 00:19:33.932 "trsvcid": "48188" 00:19:33.932 }, 00:19:33.932 "auth": { 00:19:33.932 "state": "completed", 00:19:33.932 "digest": "sha384", 00:19:33.932 "dhgroup": "ffdhe4096" 00:19:33.932 } 00:19:33.932 } 00:19:33.932 ]' 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:33.932 19:15:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.932 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.932 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.932 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.191 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:35.132 19:15:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.132 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:35.392 00:19:35.392 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.392 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.392 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.652 { 00:19:35.652 "cntlid": 77, 00:19:35.652 "qid": 0, 00:19:35.652 "state": "enabled", 00:19:35.652 "thread": "nvmf_tgt_poll_group_000", 00:19:35.652 "listen_address": { 00:19:35.652 "trtype": "TCP", 00:19:35.652 "adrfam": "IPv4", 00:19:35.652 "traddr": "10.0.0.2", 00:19:35.652 "trsvcid": "4420" 00:19:35.652 }, 00:19:35.652 "peer_address": { 00:19:35.652 "trtype": "TCP", 00:19:35.652 "adrfam": "IPv4", 00:19:35.652 "traddr": "10.0.0.1", 00:19:35.652 "trsvcid": "48214" 00:19:35.652 }, 00:19:35.652 "auth": { 00:19:35.652 "state": "completed", 00:19:35.652 "digest": "sha384", 00:19:35.652 "dhgroup": "ffdhe4096" 00:19:35.652 } 00:19:35.652 } 00:19:35.652 ]' 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.652 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.912 19:15:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:36.862 19:15:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.126 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.126 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:37.126 { 00:19:37.126 "cntlid": 79, 00:19:37.126 "qid": 
0, 00:19:37.126 "state": "enabled", 00:19:37.126 "thread": "nvmf_tgt_poll_group_000", 00:19:37.126 "listen_address": { 00:19:37.126 "trtype": "TCP", 00:19:37.126 "adrfam": "IPv4", 00:19:37.126 "traddr": "10.0.0.2", 00:19:37.126 "trsvcid": "4420" 00:19:37.126 }, 00:19:37.126 "peer_address": { 00:19:37.126 "trtype": "TCP", 00:19:37.126 "adrfam": "IPv4", 00:19:37.126 "traddr": "10.0.0.1", 00:19:37.126 "trsvcid": "48244" 00:19:37.126 }, 00:19:37.126 "auth": { 00:19:37.126 "state": "completed", 00:19:37.126 "digest": "sha384", 00:19:37.126 "dhgroup": "ffdhe4096" 00:19:37.126 } 00:19:37.126 } 00:19:37.126 ]' 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.444 19:15:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.382 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:38.382 19:15:44 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.382 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.952 00:19:38.952 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.952 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.952 19:15:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.952 { 00:19:38.952 "cntlid": 81, 00:19:38.952 "qid": 0, 00:19:38.952 "state": "enabled", 00:19:38.952 "thread": "nvmf_tgt_poll_group_000", 00:19:38.952 "listen_address": { 00:19:38.952 "trtype": "TCP", 00:19:38.952 "adrfam": "IPv4", 00:19:38.952 "traddr": "10.0.0.2", 00:19:38.952 "trsvcid": "4420" 00:19:38.952 }, 00:19:38.952 "peer_address": { 00:19:38.952 "trtype": "TCP", 00:19:38.952 "adrfam": "IPv4", 00:19:38.952 "traddr": "10.0.0.1", 00:19:38.952 "trsvcid": "48272" 00:19:38.952 }, 00:19:38.952 "auth": { 00:19:38.952 "state": "completed", 00:19:38.952 "digest": "sha384", 00:19:38.952 "dhgroup": "ffdhe6144" 00:19:38.952 } 00:19:38.952 } 00:19:38.952 ]' 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.952 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.211 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:39.211 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.211 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.211 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.211 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.211 19:15:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.149 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.149 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.718 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.718 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.718 { 00:19:40.718 "cntlid": 83, 00:19:40.718 "qid": 0, 00:19:40.718 "state": "enabled", 00:19:40.718 "thread": "nvmf_tgt_poll_group_000", 00:19:40.718 "listen_address": { 00:19:40.718 "trtype": "TCP", 00:19:40.718 "adrfam": "IPv4", 00:19:40.718 "traddr": "10.0.0.2", 00:19:40.718 "trsvcid": "4420" 00:19:40.718 }, 00:19:40.718 "peer_address": { 00:19:40.718 "trtype": "TCP", 00:19:40.718 "adrfam": "IPv4", 00:19:40.718 "traddr": "10.0.0.1", 00:19:40.719 "trsvcid": "48302" 00:19:40.719 }, 00:19:40.719 "auth": { 00:19:40.719 "state": "completed", 00:19:40.719 "digest": "sha384", 00:19:40.719 "dhgroup": "ffdhe6144" 00:19:40.719 } 00:19:40.719 } 00:19:40.719 ]' 00:19:40.719 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.719 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:40.719 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.978 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.978 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.978 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.978 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.978 19:15:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.978 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret 
DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:41.943 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.944 19:15:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.202 00:19:42.202 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.202 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.202 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.461 19:15:48 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.461 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.461 19:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.461 19:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.461 19:15:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.461 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.462 { 00:19:42.462 "cntlid": 85, 00:19:42.462 "qid": 0, 00:19:42.462 "state": "enabled", 00:19:42.462 "thread": "nvmf_tgt_poll_group_000", 00:19:42.462 "listen_address": { 00:19:42.462 "trtype": "TCP", 00:19:42.462 "adrfam": "IPv4", 00:19:42.462 "traddr": "10.0.0.2", 00:19:42.462 "trsvcid": "4420" 00:19:42.462 }, 00:19:42.462 "peer_address": { 00:19:42.462 "trtype": "TCP", 00:19:42.462 "adrfam": "IPv4", 00:19:42.462 "traddr": "10.0.0.1", 00:19:42.462 "trsvcid": "59810" 00:19:42.462 }, 00:19:42.462 "auth": { 00:19:42.462 "state": "completed", 00:19:42.462 "digest": "sha384", 00:19:42.462 "dhgroup": "ffdhe6144" 00:19:42.462 } 00:19:42.462 } 00:19:42.462 ]' 00:19:42.462 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.462 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.462 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.462 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:42.462 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.721 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.721 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.721 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.721 19:15:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.658 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
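The check that just ran (target/auth.sh@44-48 in the trace) dumps the subsystem's qpairs and asserts that the negotiated digest, dhgroup and auth state match what was configured. Below is a minimal standalone sketch of that verification, reusing the rpc.py path and subsystem NQN seen in this run; the helper name check_qpair_auth is illustrative only, and it assumes the target application answers on rpc.py's default socket, as the rpc_cmd wrapper here implies.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  subnqn=nqn.2024-03.io.spdk:cnode0
  # Query the single qpair and compare its auth block against the expected values.
  check_qpair_auth() {    # args: expected digest, expected dhgroup
    local qpairs
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "$1" ]] &&
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$2" ]] &&
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  }
  check_qpair_auth sha384 ffdhe6144
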
00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.658 19:15:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:43.918 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.177 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.177 { 00:19:44.177 "cntlid": 87, 00:19:44.177 "qid": 0, 00:19:44.177 "state": "enabled", 00:19:44.178 "thread": "nvmf_tgt_poll_group_000", 00:19:44.178 "listen_address": { 00:19:44.178 "trtype": "TCP", 00:19:44.178 "adrfam": "IPv4", 00:19:44.178 "traddr": "10.0.0.2", 00:19:44.178 "trsvcid": "4420" 00:19:44.178 }, 00:19:44.178 "peer_address": { 00:19:44.178 "trtype": "TCP", 00:19:44.178 "adrfam": "IPv4", 00:19:44.178 "traddr": "10.0.0.1", 00:19:44.178 "trsvcid": "59820" 00:19:44.178 }, 00:19:44.178 "auth": { 00:19:44.178 "state": "completed", 
00:19:44.178 "digest": "sha384", 00:19:44.178 "dhgroup": "ffdhe6144" 00:19:44.178 } 00:19:44.178 } 00:19:44.178 ]' 00:19:44.178 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:44.178 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:44.178 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:44.437 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:44.437 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:44.437 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:44.437 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:44.437 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.437 19:15:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.375 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.944 00:19:45.944 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.944 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.944 19:15:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.203 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.204 { 00:19:46.204 "cntlid": 89, 00:19:46.204 "qid": 0, 00:19:46.204 "state": "enabled", 00:19:46.204 "thread": "nvmf_tgt_poll_group_000", 00:19:46.204 "listen_address": { 00:19:46.204 "trtype": "TCP", 00:19:46.204 "adrfam": "IPv4", 00:19:46.204 "traddr": "10.0.0.2", 00:19:46.204 "trsvcid": "4420" 00:19:46.204 }, 00:19:46.204 "peer_address": { 00:19:46.204 "trtype": "TCP", 00:19:46.204 "adrfam": "IPv4", 00:19:46.204 "traddr": "10.0.0.1", 00:19:46.204 "trsvcid": "59852" 00:19:46.204 }, 00:19:46.204 "auth": { 00:19:46.204 "state": "completed", 00:19:46.204 "digest": "sha384", 00:19:46.204 "dhgroup": "ffdhe8192" 00:19:46.204 } 00:19:46.204 } 00:19:46.204 ]' 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.204 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.463 19:15:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:47.032 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.291 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.291 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
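Each keyid iteration above drives the same host-side sequence over /var/tmp/host.sock: pin the initiator to a single digest and dhgroup, register the host NQN on the target with the matching DH-CHAP keys, then attach a controller, which is where the authentication exchange actually runs. A condensed sketch of one such round follows, reusing the socket, NQNs, bdev name and key names from this trace; it assumes the key/ckey keyring entries were loaded earlier in the run (as auth.sh does) and that the target application answers on rpc.py's default socket.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

  # Restrict the initiator to one digest/dhgroup so the negotiation is deterministic.
  "$rpc" -s "$hostsock" bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
  # Allow the host on the target side; the ctrlr key makes the authentication bidirectional.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Attaching the controller triggers the DH-CHAP exchange when the queue connects.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # Tear down before the next digest/dhgroup/key combination.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
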
00:19:47.859 00:19:47.859 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.859 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.859 19:15:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.119 { 00:19:48.119 "cntlid": 91, 00:19:48.119 "qid": 0, 00:19:48.119 "state": "enabled", 00:19:48.119 "thread": "nvmf_tgt_poll_group_000", 00:19:48.119 "listen_address": { 00:19:48.119 "trtype": "TCP", 00:19:48.119 "adrfam": "IPv4", 00:19:48.119 "traddr": "10.0.0.2", 00:19:48.119 "trsvcid": "4420" 00:19:48.119 }, 00:19:48.119 "peer_address": { 00:19:48.119 "trtype": "TCP", 00:19:48.119 "adrfam": "IPv4", 00:19:48.119 "traddr": "10.0.0.1", 00:19:48.119 "trsvcid": "59882" 00:19:48.119 }, 00:19:48.119 "auth": { 00:19:48.119 "state": "completed", 00:19:48.119 "digest": "sha384", 00:19:48.119 "dhgroup": "ffdhe8192" 00:19:48.119 } 00:19:48.119 } 00:19:48.119 ]' 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.119 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.379 19:15:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.318 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.887 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.887 { 
00:19:49.887 "cntlid": 93, 00:19:49.887 "qid": 0, 00:19:49.887 "state": "enabled", 00:19:49.887 "thread": "nvmf_tgt_poll_group_000", 00:19:49.887 "listen_address": { 00:19:49.887 "trtype": "TCP", 00:19:49.887 "adrfam": "IPv4", 00:19:49.887 "traddr": "10.0.0.2", 00:19:49.887 "trsvcid": "4420" 00:19:49.887 }, 00:19:49.887 "peer_address": { 00:19:49.887 "trtype": "TCP", 00:19:49.887 "adrfam": "IPv4", 00:19:49.887 "traddr": "10.0.0.1", 00:19:49.887 "trsvcid": "59914" 00:19:49.887 }, 00:19:49.887 "auth": { 00:19:49.887 "state": "completed", 00:19:49.887 "digest": "sha384", 00:19:49.887 "dhgroup": "ffdhe8192" 00:19:49.887 } 00:19:49.887 } 00:19:49.887 ]' 00:19:49.887 19:15:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.887 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:49.887 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:50.147 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:50.147 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:50.147 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.147 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.147 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.147 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.084 19:15:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:51.084 19:15:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.084 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.670 00:19:51.670 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.671 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.671 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.929 { 00:19:51.929 "cntlid": 95, 00:19:51.929 "qid": 0, 00:19:51.929 "state": "enabled", 00:19:51.929 "thread": "nvmf_tgt_poll_group_000", 00:19:51.929 "listen_address": { 00:19:51.929 "trtype": "TCP", 00:19:51.929 "adrfam": "IPv4", 00:19:51.929 "traddr": "10.0.0.2", 00:19:51.929 "trsvcid": "4420" 00:19:51.929 }, 00:19:51.929 "peer_address": { 00:19:51.929 "trtype": "TCP", 00:19:51.929 "adrfam": "IPv4", 00:19:51.929 "traddr": "10.0.0.1", 00:19:51.929 "trsvcid": "59938" 00:19:51.929 }, 00:19:51.929 "auth": { 00:19:51.929 "state": "completed", 00:19:51.929 "digest": "sha384", 00:19:51.929 "dhgroup": "ffdhe8192" 00:19:51.929 } 00:19:51.929 } 00:19:51.929 ]' 00:19:51.929 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.930 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:51.930 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.930 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.930 19:15:57 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.930 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.930 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.930 19:15:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.188 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:52.756 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.017 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.017 19:15:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.017 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.277 00:19:53.277 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.277 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.277 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:53.537 { 00:19:53.537 "cntlid": 97, 00:19:53.537 "qid": 0, 00:19:53.537 "state": "enabled", 00:19:53.537 "thread": "nvmf_tgt_poll_group_000", 00:19:53.537 "listen_address": { 00:19:53.537 "trtype": "TCP", 00:19:53.537 "adrfam": "IPv4", 00:19:53.537 "traddr": "10.0.0.2", 00:19:53.537 "trsvcid": "4420" 00:19:53.537 }, 00:19:53.537 "peer_address": { 00:19:53.537 "trtype": "TCP", 00:19:53.537 "adrfam": "IPv4", 00:19:53.537 "traddr": "10.0.0.1", 00:19:53.537 "trsvcid": "46446" 00:19:53.537 }, 00:19:53.537 "auth": { 00:19:53.537 "state": "completed", 00:19:53.537 "digest": "sha512", 00:19:53.537 "dhgroup": "null" 00:19:53.537 } 00:19:53.537 } 00:19:53.537 ]' 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.537 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.796 19:15:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret 
DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.735 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.735 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.995 00:19:54.995 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.995 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.995 19:16:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.255 { 00:19:55.255 "cntlid": 99, 00:19:55.255 "qid": 0, 00:19:55.255 "state": "enabled", 00:19:55.255 "thread": "nvmf_tgt_poll_group_000", 00:19:55.255 "listen_address": { 00:19:55.255 "trtype": "TCP", 00:19:55.255 "adrfam": "IPv4", 00:19:55.255 "traddr": "10.0.0.2", 00:19:55.255 "trsvcid": "4420" 00:19:55.255 }, 00:19:55.255 "peer_address": { 00:19:55.255 "trtype": "TCP", 00:19:55.255 "adrfam": "IPv4", 00:19:55.255 "traddr": "10.0.0.1", 00:19:55.255 "trsvcid": "46470" 00:19:55.255 }, 00:19:55.255 "auth": { 00:19:55.255 "state": "completed", 00:19:55.255 "digest": "sha512", 00:19:55.255 "dhgroup": "null" 00:19:55.255 } 00:19:55.255 } 00:19:55.255 ]' 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.255 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.513 19:16:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:19:56.081 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.081 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:56.081 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.081 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.341 19:16:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.341 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:56.601 00:19:56.601 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.601 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.601 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.871 { 00:19:56.871 "cntlid": 101, 00:19:56.871 "qid": 0, 00:19:56.871 "state": "enabled", 00:19:56.871 "thread": "nvmf_tgt_poll_group_000", 00:19:56.871 "listen_address": { 00:19:56.871 "trtype": "TCP", 00:19:56.871 "adrfam": "IPv4", 00:19:56.871 "traddr": "10.0.0.2", 00:19:56.871 "trsvcid": "4420" 00:19:56.871 }, 00:19:56.871 "peer_address": { 00:19:56.871 "trtype": "TCP", 00:19:56.871 "adrfam": "IPv4", 00:19:56.871 "traddr": "10.0.0.1", 00:19:56.871 "trsvcid": "46500" 00:19:56.871 }, 00:19:56.871 "auth": 
{ 00:19:56.871 "state": "completed", 00:19:56.871 "digest": "sha512", 00:19:56.871 "dhgroup": "null" 00:19:56.871 } 00:19:56.871 } 00:19:56.871 ]' 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.871 19:16:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.132 19:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:19:57.701 19:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:57.960 19:16:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.960 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.961 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.219 00:19:58.219 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.219 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.219 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.479 { 00:19:58.479 "cntlid": 103, 00:19:58.479 "qid": 0, 00:19:58.479 "state": "enabled", 00:19:58.479 "thread": "nvmf_tgt_poll_group_000", 00:19:58.479 "listen_address": { 00:19:58.479 "trtype": "TCP", 00:19:58.479 "adrfam": "IPv4", 00:19:58.479 "traddr": "10.0.0.2", 00:19:58.479 "trsvcid": "4420" 00:19:58.479 }, 00:19:58.479 "peer_address": { 00:19:58.479 "trtype": "TCP", 00:19:58.479 "adrfam": "IPv4", 00:19:58.479 "traddr": "10.0.0.1", 00:19:58.479 "trsvcid": "46538" 00:19:58.479 }, 00:19:58.479 "auth": { 00:19:58.479 "state": "completed", 00:19:58.479 "digest": "sha512", 00:19:58.479 "dhgroup": "null" 00:19:58.479 } 00:19:58.479 } 00:19:58.479 ]' 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.479 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.738 19:16:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:19:59.676 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.677 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.937 00:19:59.937 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.937 19:16:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.937 19:16:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.937 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.197 { 00:20:00.197 "cntlid": 105, 00:20:00.197 "qid": 0, 00:20:00.197 "state": "enabled", 00:20:00.197 "thread": "nvmf_tgt_poll_group_000", 00:20:00.197 "listen_address": { 00:20:00.197 "trtype": "TCP", 00:20:00.197 "adrfam": "IPv4", 00:20:00.197 "traddr": "10.0.0.2", 00:20:00.197 "trsvcid": "4420" 00:20:00.197 }, 00:20:00.197 "peer_address": { 00:20:00.197 "trtype": "TCP", 00:20:00.197 "adrfam": "IPv4", 00:20:00.197 "traddr": "10.0.0.1", 00:20:00.197 "trsvcid": "46562" 00:20:00.197 }, 00:20:00.197 "auth": { 00:20:00.197 "state": "completed", 00:20:00.197 "digest": "sha512", 00:20:00.197 "dhgroup": "ffdhe2048" 00:20:00.197 } 00:20:00.197 } 00:20:00.197 ]' 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.197 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.471 19:16:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.052 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
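The iterations traced above all follow the same connect_authenticate sequence from target/auth.sh: the outer loops walk every --dhchap-dhgroups value and every key index, and each pass configures the host, registers the key on the subsystem, attaches a controller, inspects the resulting queue pair, and tears everything down again, first through the SPDK host application and then through nvme-cli. A minimal sketch of one such pass follows; it uses only commands visible in this trace, assumes the subsystem-side rpc_cmd calls go to the target application's default RPC socket, and substitutes placeholder DHHC-1 secrets and previously registered key names (key0/ckey0) for the real material.

# Sketch of one connect_authenticate pass (sha512 / ffdhe2048 / key0). Assumptions:
# default target RPC socket for the subsystem-side calls, placeholder DHHC-1 secrets,
# and key0/ckey0 being key names set up earlier in the test (not shown in this trace).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
host_sock=/var/tmp/host.sock
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be

# Limit the host's bdev_nvme module to the digest/dhgroup pair under test.
$rpc -s $host_sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Allow the host on the subsystem with its DH-HMAC-CHAP key and controller key.
$rpc nvmf_subsystem_add_host $subnqn $hostnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller from the host side; this is where authentication happens.
$rpc -s $host_sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $hostnqn -n $subnqn --dhchap-key key0 --dhchap-ctrlr-key ckey0
$rpc -s $host_sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0

# Confirm the negotiated parameters on the target's queue pair.
$rpc nvmf_subsystem_get_qpairs $subnqn | \
    jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'   # sha512 / ffdhe2048 / completed

# Tear down, then repeat the connection through nvme-cli with explicit secrets.
$rpc -s $host_sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n $subnqn -i 1 -q $hostnqn \
    --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be \
    --dhchap-secret 'DHHC-1:00:<host key>' --dhchap-ctrl-secret 'DHHC-1:03:<controller key>'
nvme disconnect -n $subnqn          # expect: disconnected 1 controller(s)
$rpc nvmf_subsystem_remove_host $subnqn $hostnqn

In the trace, a pass counts as successful when the jq checks report the expected digest and dhgroup with state "completed" and when nvme disconnect reports exactly one controller disconnected, which is what each of the surrounding iterations shows.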
00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.052 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.312 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:01.572 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.572 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.573 19:16:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.573 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.573 { 00:20:01.573 "cntlid": 107, 00:20:01.573 "qid": 0, 00:20:01.573 "state": "enabled", 00:20:01.573 "thread": 
"nvmf_tgt_poll_group_000", 00:20:01.573 "listen_address": { 00:20:01.573 "trtype": "TCP", 00:20:01.573 "adrfam": "IPv4", 00:20:01.573 "traddr": "10.0.0.2", 00:20:01.573 "trsvcid": "4420" 00:20:01.573 }, 00:20:01.573 "peer_address": { 00:20:01.573 "trtype": "TCP", 00:20:01.573 "adrfam": "IPv4", 00:20:01.573 "traddr": "10.0.0.1", 00:20:01.573 "trsvcid": "46584" 00:20:01.573 }, 00:20:01.573 "auth": { 00:20:01.573 "state": "completed", 00:20:01.573 "digest": "sha512", 00:20:01.573 "dhgroup": "ffdhe2048" 00:20:01.573 } 00:20:01.573 } 00:20:01.573 ]' 00:20:01.573 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.832 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.092 19:16:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.661 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.920 19:16:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.920 19:16:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.179 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.179 19:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.439 { 00:20:03.439 "cntlid": 109, 00:20:03.439 "qid": 0, 00:20:03.439 "state": "enabled", 00:20:03.439 "thread": "nvmf_tgt_poll_group_000", 00:20:03.439 "listen_address": { 00:20:03.439 "trtype": "TCP", 00:20:03.439 "adrfam": "IPv4", 00:20:03.439 "traddr": "10.0.0.2", 00:20:03.439 "trsvcid": "4420" 00:20:03.439 }, 00:20:03.439 "peer_address": { 00:20:03.439 "trtype": "TCP", 00:20:03.439 "adrfam": "IPv4", 00:20:03.439 "traddr": "10.0.0.1", 00:20:03.439 "trsvcid": "50754" 00:20:03.439 }, 00:20:03.439 "auth": { 00:20:03.439 "state": "completed", 00:20:03.439 "digest": "sha512", 00:20:03.439 "dhgroup": "ffdhe2048" 00:20:03.439 } 00:20:03.439 } 00:20:03.439 ]' 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.439 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.700 19:16:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.270 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.531 19:16:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.792 00:20:04.792 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.792 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.792 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.052 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.053 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.053 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.053 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.053 19:16:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.053 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.053 { 00:20:05.053 "cntlid": 111, 00:20:05.053 "qid": 0, 00:20:05.053 "state": "enabled", 00:20:05.053 "thread": "nvmf_tgt_poll_group_000", 00:20:05.053 "listen_address": { 00:20:05.053 "trtype": "TCP", 00:20:05.053 "adrfam": "IPv4", 00:20:05.053 "traddr": "10.0.0.2", 00:20:05.053 "trsvcid": "4420" 00:20:05.053 }, 00:20:05.053 "peer_address": { 00:20:05.053 "trtype": "TCP", 00:20:05.053 "adrfam": "IPv4", 00:20:05.053 "traddr": "10.0.0.1", 00:20:05.053 "trsvcid": "50786" 00:20:05.053 }, 00:20:05.053 "auth": { 00:20:05.053 "state": "completed", 00:20:05.053 "digest": "sha512", 00:20:05.053 "dhgroup": "ffdhe2048" 00:20:05.053 } 00:20:05.053 } 00:20:05.053 ]' 00:20:05.053 19:16:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.053 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.313 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:20:05.882 19:16:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.882 19:16:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:05.882 19:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.882 19:16:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.882 19:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.882 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.882 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.882 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:05.882 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:06.142 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:20:06.142 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.142 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:06.142 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:06.142 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:06.142 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.143 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.143 19:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.143 19:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.143 19:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.143 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.143 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.403 00:20:06.403 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.403 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.403 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.664 { 00:20:06.664 "cntlid": 113, 00:20:06.664 "qid": 0, 00:20:06.664 "state": "enabled", 00:20:06.664 "thread": "nvmf_tgt_poll_group_000", 00:20:06.664 "listen_address": { 00:20:06.664 "trtype": "TCP", 00:20:06.664 "adrfam": "IPv4", 00:20:06.664 "traddr": "10.0.0.2", 00:20:06.664 "trsvcid": "4420" 00:20:06.664 }, 00:20:06.664 "peer_address": { 00:20:06.664 "trtype": "TCP", 00:20:06.664 "adrfam": "IPv4", 00:20:06.664 "traddr": "10.0.0.1", 00:20:06.664 "trsvcid": "50814" 00:20:06.664 }, 00:20:06.664 "auth": { 00:20:06.664 "state": "completed", 00:20:06.664 "digest": "sha512", 00:20:06.664 "dhgroup": "ffdhe3072" 00:20:06.664 } 00:20:06.664 } 00:20:06.664 ]' 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.664 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.922 19:16:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.860 19:16:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.119 00:20:08.119 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:08.119 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:08.119 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:08.379 { 00:20:08.379 "cntlid": 115, 00:20:08.379 "qid": 0, 00:20:08.379 "state": "enabled", 00:20:08.379 "thread": "nvmf_tgt_poll_group_000", 00:20:08.379 "listen_address": { 00:20:08.379 "trtype": "TCP", 00:20:08.379 "adrfam": "IPv4", 00:20:08.379 "traddr": "10.0.0.2", 00:20:08.379 "trsvcid": "4420" 00:20:08.379 }, 00:20:08.379 "peer_address": { 00:20:08.379 "trtype": "TCP", 00:20:08.379 "adrfam": "IPv4", 00:20:08.379 "traddr": "10.0.0.1", 00:20:08.379 "trsvcid": "50848" 00:20:08.379 }, 00:20:08.379 "auth": { 00:20:08.379 "state": "completed", 00:20:08.379 "digest": "sha512", 00:20:08.379 "dhgroup": "ffdhe3072" 00:20:08.379 } 00:20:08.379 } 
00:20:08.379 ]' 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.379 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.639 19:16:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:20:09.207 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.207 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:09.207 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.207 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.467 19:16:15 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.467 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.727 00:20:09.727 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.727 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.727 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.987 { 00:20:09.987 "cntlid": 117, 00:20:09.987 "qid": 0, 00:20:09.987 "state": "enabled", 00:20:09.987 "thread": "nvmf_tgt_poll_group_000", 00:20:09.987 "listen_address": { 00:20:09.987 "trtype": "TCP", 00:20:09.987 "adrfam": "IPv4", 00:20:09.987 "traddr": "10.0.0.2", 00:20:09.987 "trsvcid": "4420" 00:20:09.987 }, 00:20:09.987 "peer_address": { 00:20:09.987 "trtype": "TCP", 00:20:09.987 "adrfam": "IPv4", 00:20:09.987 "traddr": "10.0.0.1", 00:20:09.987 "trsvcid": "50872" 00:20:09.987 }, 00:20:09.987 "auth": { 00:20:09.987 "state": "completed", 00:20:09.987 "digest": "sha512", 00:20:09.987 "dhgroup": "ffdhe3072" 00:20:09.987 } 00:20:09.987 } 00:20:09.987 ]' 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.987 19:16:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.987 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:09.987 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.987 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.987 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.987 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.246 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:20:11.186 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.186 19:16:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:11.186 19:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.186 19:16:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.186 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:11.446 00:20:11.446 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.446 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.446 19:16:17 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.446 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.446 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.446 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.706 { 00:20:11.706 "cntlid": 119, 00:20:11.706 "qid": 0, 00:20:11.706 "state": "enabled", 00:20:11.706 "thread": "nvmf_tgt_poll_group_000", 00:20:11.706 "listen_address": { 00:20:11.706 "trtype": "TCP", 00:20:11.706 "adrfam": "IPv4", 00:20:11.706 "traddr": "10.0.0.2", 00:20:11.706 "trsvcid": "4420" 00:20:11.706 }, 00:20:11.706 "peer_address": { 00:20:11.706 "trtype": "TCP", 00:20:11.706 "adrfam": "IPv4", 00:20:11.706 "traddr": "10.0.0.1", 00:20:11.706 "trsvcid": "50898" 00:20:11.706 }, 00:20:11.706 "auth": { 00:20:11.706 "state": "completed", 00:20:11.706 "digest": "sha512", 00:20:11.706 "dhgroup": "ffdhe3072" 00:20:11.706 } 00:20:11.706 } 00:20:11.706 ]' 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.706 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.967 19:16:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.537 19:16:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.537 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.797 19:16:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.056 00:20:13.056 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.056 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.056 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.316 { 00:20:13.316 "cntlid": 121, 00:20:13.316 "qid": 0, 00:20:13.316 "state": "enabled", 00:20:13.316 "thread": "nvmf_tgt_poll_group_000", 00:20:13.316 "listen_address": { 00:20:13.316 "trtype": "TCP", 00:20:13.316 "adrfam": "IPv4", 
00:20:13.316 "traddr": "10.0.0.2", 00:20:13.316 "trsvcid": "4420" 00:20:13.316 }, 00:20:13.316 "peer_address": { 00:20:13.316 "trtype": "TCP", 00:20:13.316 "adrfam": "IPv4", 00:20:13.316 "traddr": "10.0.0.1", 00:20:13.316 "trsvcid": "60954" 00:20:13.316 }, 00:20:13.316 "auth": { 00:20:13.316 "state": "completed", 00:20:13.316 "digest": "sha512", 00:20:13.316 "dhgroup": "ffdhe4096" 00:20:13.316 } 00:20:13.316 } 00:20:13.316 ]' 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.316 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.576 19:16:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.515 19:16:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.515 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.516 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.775 00:20:14.775 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.775 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.775 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.036 { 00:20:15.036 "cntlid": 123, 00:20:15.036 "qid": 0, 00:20:15.036 "state": "enabled", 00:20:15.036 "thread": "nvmf_tgt_poll_group_000", 00:20:15.036 "listen_address": { 00:20:15.036 "trtype": "TCP", 00:20:15.036 "adrfam": "IPv4", 00:20:15.036 "traddr": "10.0.0.2", 00:20:15.036 "trsvcid": "4420" 00:20:15.036 }, 00:20:15.036 "peer_address": { 00:20:15.036 "trtype": "TCP", 00:20:15.036 "adrfam": "IPv4", 00:20:15.036 "traddr": "10.0.0.1", 00:20:15.036 "trsvcid": "60978" 00:20:15.036 }, 00:20:15.036 "auth": { 00:20:15.036 "state": "completed", 00:20:15.036 "digest": "sha512", 00:20:15.036 "dhgroup": "ffdhe4096" 00:20:15.036 } 00:20:15.036 } 00:20:15.036 ]' 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:15.036 19:16:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:15.036 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:15.036 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:15.036 19:16:21 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.036 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.036 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.295 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:20:15.865 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.126 19:16:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.126 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.386 00:20:16.386 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.386 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.386 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.645 { 00:20:16.645 "cntlid": 125, 00:20:16.645 "qid": 0, 00:20:16.645 "state": "enabled", 00:20:16.645 "thread": "nvmf_tgt_poll_group_000", 00:20:16.645 "listen_address": { 00:20:16.645 "trtype": "TCP", 00:20:16.645 "adrfam": "IPv4", 00:20:16.645 "traddr": "10.0.0.2", 00:20:16.645 "trsvcid": "4420" 00:20:16.645 }, 00:20:16.645 "peer_address": { 00:20:16.645 "trtype": "TCP", 00:20:16.645 "adrfam": "IPv4", 00:20:16.645 "traddr": "10.0.0.1", 00:20:16.645 "trsvcid": "32778" 00:20:16.645 }, 00:20:16.645 "auth": { 00:20:16.645 "state": "completed", 00:20:16.645 "digest": "sha512", 00:20:16.645 "dhgroup": "ffdhe4096" 00:20:16.645 } 00:20:16.645 } 00:20:16.645 ]' 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.645 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.908 19:16:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:20:17.847 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.848 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
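[editor's sketch] Each pass above also exercises the kernel NVMe/TCP initiator: after the SPDK-side checks, auth.sh connects with nvme-cli using DH-HMAC-CHAP secrets and then disconnects before the host is removed from the subsystem. A minimal sketch of that host-side step, assuming nvme-cli with DH-CHAP support; the secret strings below are placeholders, not this run's actual key material:

  HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"  # host NQN used in this run
  HOSTID="00d0226a-fbea-ec11-9bc7-a4bf019282be"

  # Connect the kernel initiator with DH-HMAC-CHAP (placeholder DHHC-1 secrets).
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$HOSTNQN" --hostid "$HOSTID" \
      --dhchap-secret "DHHC-1:02:<base64 host key>:" \
      --dhchap-ctrl-secret "DHHC-1:01:<base64 controller key>:"

  # A clean connect/disconnect pair is the pass criterion for this step.
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0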
00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.848 19:16:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.107 00:20:18.107 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.107 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.107 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.368 { 00:20:18.368 "cntlid": 127, 00:20:18.368 "qid": 0, 00:20:18.368 "state": "enabled", 00:20:18.368 "thread": "nvmf_tgt_poll_group_000", 00:20:18.368 "listen_address": { 00:20:18.368 "trtype": "TCP", 00:20:18.368 "adrfam": "IPv4", 00:20:18.368 "traddr": "10.0.0.2", 00:20:18.368 "trsvcid": "4420" 00:20:18.368 }, 00:20:18.368 "peer_address": { 00:20:18.368 "trtype": "TCP", 00:20:18.368 "adrfam": "IPv4", 00:20:18.368 "traddr": "10.0.0.1", 00:20:18.368 "trsvcid": "32808" 00:20:18.368 }, 00:20:18.368 "auth": { 00:20:18.368 "state": "completed", 00:20:18.368 "digest": "sha512", 00:20:18.368 "dhgroup": "ffdhe4096" 00:20:18.368 } 00:20:18.368 } 00:20:18.368 ]' 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.368 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.629 19:16:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.198 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.458 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.718 00:20:19.718 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.718 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.718 19:16:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.978 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.978 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.978 19:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.978 19:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.978 19:16:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.978 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.978 { 00:20:19.978 "cntlid": 129, 00:20:19.978 "qid": 0, 00:20:19.978 "state": "enabled", 00:20:19.978 "thread": "nvmf_tgt_poll_group_000", 00:20:19.978 "listen_address": { 00:20:19.978 "trtype": "TCP", 00:20:19.978 "adrfam": "IPv4", 00:20:19.978 "traddr": "10.0.0.2", 00:20:19.979 "trsvcid": "4420" 00:20:19.979 }, 00:20:19.979 "peer_address": { 00:20:19.979 "trtype": "TCP", 00:20:19.979 "adrfam": "IPv4", 00:20:19.979 "traddr": "10.0.0.1", 00:20:19.979 "trsvcid": "32846" 00:20:19.979 }, 00:20:19.979 "auth": { 00:20:19.979 "state": "completed", 00:20:19.979 "digest": "sha512", 00:20:19.979 "dhgroup": "ffdhe6144" 00:20:19.979 } 00:20:19.979 } 00:20:19.979 ]' 00:20:19.979 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.979 19:16:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.979 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.238 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:20.238 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.238 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.238 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.238 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.238 19:16:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.176 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.176 19:16:27 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.176 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.745 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.745 { 00:20:21.745 "cntlid": 131, 00:20:21.745 "qid": 0, 00:20:21.745 "state": "enabled", 00:20:21.745 "thread": "nvmf_tgt_poll_group_000", 00:20:21.745 "listen_address": { 00:20:21.745 "trtype": "TCP", 00:20:21.745 "adrfam": "IPv4", 00:20:21.745 "traddr": "10.0.0.2", 00:20:21.745 "trsvcid": "4420" 00:20:21.745 }, 00:20:21.745 "peer_address": { 00:20:21.745 "trtype": "TCP", 00:20:21.745 "adrfam": "IPv4", 00:20:21.745 "traddr": "10.0.0.1", 00:20:21.745 "trsvcid": "32870" 00:20:21.745 }, 00:20:21.745 "auth": { 00:20:21.745 "state": "completed", 00:20:21.745 "digest": "sha512", 00:20:21.745 "dhgroup": "ffdhe6144" 00:20:21.745 } 00:20:21.745 } 00:20:21.745 ]' 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.745 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.004 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.004 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.004 19:16:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.004 19:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.997 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.997 19:16:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.997 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.256 00:20:23.256 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.256 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.256 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.515 { 00:20:23.515 "cntlid": 133, 00:20:23.515 "qid": 0, 00:20:23.515 "state": "enabled", 00:20:23.515 "thread": "nvmf_tgt_poll_group_000", 00:20:23.515 "listen_address": { 00:20:23.515 "trtype": "TCP", 00:20:23.515 "adrfam": "IPv4", 00:20:23.515 "traddr": "10.0.0.2", 00:20:23.515 "trsvcid": "4420" 00:20:23.515 }, 00:20:23.515 "peer_address": { 00:20:23.515 "trtype": "TCP", 00:20:23.515 "adrfam": "IPv4", 00:20:23.515 "traddr": "10.0.0.1", 00:20:23.515 "trsvcid": "33030" 00:20:23.515 }, 00:20:23.515 "auth": { 00:20:23.515 "state": "completed", 00:20:23.515 "digest": "sha512", 00:20:23.515 "dhgroup": "ffdhe6144" 00:20:23.515 } 00:20:23.515 } 00:20:23.515 ]' 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.515 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.773 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.773 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.773 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.774 19:16:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.709 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
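[editor's sketch] Every keyid iteration in the trace follows the same RPC pattern: pin the initiator bdev to one digest/dhgroup, allow the host NQN on the target subsystem with the key under test, then attach a controller that only comes up if DH-CHAP completes. A condensed sketch of one pass, assuming (as in this run) the target uses the default RPC socket and the host-side bdev_nvme instance listens on /var/tmp/host.sock; the key names are the test's pre-registered placeholders and are not created here:

  HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be"
  TGT_RPC="scripts/rpc.py"                          # target RPC, default socket
  HOST_RPC="scripts/rpc.py -s /var/tmp/host.sock"   # host-side RPC server, as in this run

  keyid=2
  ckeys=(1 1 1 "")   # placeholder bookkeeping: non-empty entry means keyid has a paired controller key
  # auth.sh's idiom: the controller-key option is only emitted when ckeys[keyid] is non-empty.
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

  # Pin the initiator to a single digest/dhgroup combination.
  $HOST_RPC bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

  # Allow the host NQN on the subsystem with the key under test.
  $TGT_RPC nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" \
      --dhchap-key "key$keyid" "${ckey[@]}"

  # Attach a controller; it is only created if DH-CHAP authentication succeeds.
  $HOST_RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "key$keyid" "${ckey[@]}"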
00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.709 19:16:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.968 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.227 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.227 { 00:20:25.227 "cntlid": 135, 00:20:25.227 "qid": 0, 00:20:25.227 "state": "enabled", 00:20:25.227 "thread": "nvmf_tgt_poll_group_000", 00:20:25.227 "listen_address": { 00:20:25.227 "trtype": "TCP", 00:20:25.227 "adrfam": "IPv4", 00:20:25.227 "traddr": "10.0.0.2", 00:20:25.227 "trsvcid": 
"4420" 00:20:25.227 }, 00:20:25.227 "peer_address": { 00:20:25.227 "trtype": "TCP", 00:20:25.227 "adrfam": "IPv4", 00:20:25.227 "traddr": "10.0.0.1", 00:20:25.227 "trsvcid": "33054" 00:20:25.227 }, 00:20:25.228 "auth": { 00:20:25.228 "state": "completed", 00:20:25.228 "digest": "sha512", 00:20:25.228 "dhgroup": "ffdhe6144" 00:20:25.228 } 00:20:25.228 } 00:20:25.228 ]' 00:20:25.228 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.228 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.228 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.486 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.486 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.486 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.486 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.486 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.486 19:16:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.422 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:26.422 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.423 19:16:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.990 00:20:26.990 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:26.990 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:26.990 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.249 { 00:20:27.249 "cntlid": 137, 00:20:27.249 "qid": 0, 00:20:27.249 "state": "enabled", 00:20:27.249 "thread": "nvmf_tgt_poll_group_000", 00:20:27.249 "listen_address": { 00:20:27.249 "trtype": "TCP", 00:20:27.249 "adrfam": "IPv4", 00:20:27.249 "traddr": "10.0.0.2", 00:20:27.249 "trsvcid": "4420" 00:20:27.249 }, 00:20:27.249 "peer_address": { 00:20:27.249 "trtype": "TCP", 00:20:27.249 "adrfam": "IPv4", 00:20:27.249 "traddr": "10.0.0.1", 00:20:27.249 "trsvcid": "33086" 00:20:27.249 }, 00:20:27.249 "auth": { 00:20:27.249 "state": "completed", 00:20:27.249 "digest": "sha512", 00:20:27.249 "dhgroup": "ffdhe8192" 00:20:27.249 } 00:20:27.249 } 00:20:27.249 ]' 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.249 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.523 19:16:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.461 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.461 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:29.028 00:20:29.028 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.028 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.028 19:16:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.028 { 00:20:29.028 "cntlid": 139, 00:20:29.028 "qid": 0, 00:20:29.028 "state": "enabled", 00:20:29.028 "thread": "nvmf_tgt_poll_group_000", 00:20:29.028 "listen_address": { 00:20:29.028 "trtype": "TCP", 00:20:29.028 "adrfam": "IPv4", 00:20:29.028 "traddr": "10.0.0.2", 00:20:29.028 "trsvcid": "4420" 00:20:29.028 }, 00:20:29.028 "peer_address": { 00:20:29.028 "trtype": "TCP", 00:20:29.028 "adrfam": "IPv4", 00:20:29.028 "traddr": "10.0.0.1", 00:20:29.028 "trsvcid": "33112" 00:20:29.028 }, 00:20:29.028 "auth": { 00:20:29.028 "state": "completed", 00:20:29.028 "digest": "sha512", 00:20:29.028 "dhgroup": "ffdhe8192" 00:20:29.028 } 00:20:29.028 } 00:20:29.028 ]' 00:20:29.028 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:29.287 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.287 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:29.287 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:29.287 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:29.287 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.287 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.288 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.547 19:16:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:01:YWYwOTY5MTI1YjNhNzcyNWFiY2NlN2Q1NTYxY2UyMWVatocV: --dhchap-ctrl-secret DHHC-1:02:MGVmYzMxOTM2NDVlMDk0MjZmMGRlMDFkNDAzZmVlYWRjNjE0YjUzMWI1MDA2Y2MwyL1WNQ==: 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
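[editor's sketch] Between attach and detach, each pass also confirms from the target's side that the queue pair really negotiated the expected parameters: the .auth object returned by nvmf_subsystem_get_qpairs carries the digest, dhgroup, and final authentication state. A sketch of that check, mirroring the jq probes in the trace (target RPC socket assumed to be the default; values match the ffdhe8192 iterations shown here):

  # Ask the target for the subsystem's qpairs and verify the negotiated auth fields.
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)

  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha512"    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]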
00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.116 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:30.375 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.376 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.945 00:20:30.945 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.945 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.945 19:16:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.945 { 00:20:30.945 "cntlid": 141, 00:20:30.945 "qid": 0, 00:20:30.945 "state": "enabled", 00:20:30.945 "thread": "nvmf_tgt_poll_group_000", 00:20:30.945 "listen_address": { 00:20:30.945 "trtype": "TCP", 00:20:30.945 "adrfam": "IPv4", 00:20:30.945 "traddr": "10.0.0.2", 00:20:30.945 "trsvcid": "4420" 00:20:30.945 }, 00:20:30.945 "peer_address": { 00:20:30.945 "trtype": "TCP", 00:20:30.945 "adrfam": "IPv4", 00:20:30.945 "traddr": "10.0.0.1", 00:20:30.945 "trsvcid": "33138" 00:20:30.945 }, 00:20:30.945 "auth": { 00:20:30.945 "state": "completed", 00:20:30.945 "digest": "sha512", 00:20:30.945 "dhgroup": "ffdhe8192" 00:20:30.945 } 00:20:30.945 } 00:20:30.945 ]' 00:20:30.945 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.205 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.464 19:16:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:02:NGYzMDZjZTg3MmQ5OWQ3NzQ1MDlhMDNjNmIxYTI1NzZmYzU2MjcxNzI2NTAwNmYwwDZaNg==: --dhchap-ctrl-secret DHHC-1:01:MTdiYjE1MGE1YTc3MTM4ODdjOGVkN2M3MzcxNWZjZDLyX17G: 00:20:32.033 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.033 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.033 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:32.033 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.033 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.033 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.034 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.034 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.034 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.293 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:32.862 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.862 { 00:20:32.862 "cntlid": 143, 00:20:32.862 "qid": 0, 00:20:32.862 "state": "enabled", 00:20:32.862 "thread": "nvmf_tgt_poll_group_000", 00:20:32.862 "listen_address": { 00:20:32.862 "trtype": "TCP", 00:20:32.862 "adrfam": "IPv4", 00:20:32.862 "traddr": "10.0.0.2", 00:20:32.862 "trsvcid": "4420" 00:20:32.862 }, 00:20:32.862 "peer_address": { 00:20:32.862 "trtype": "TCP", 00:20:32.862 "adrfam": "IPv4", 00:20:32.862 "traddr": "10.0.0.1", 00:20:32.862 "trsvcid": "46742" 00:20:32.862 }, 00:20:32.862 "auth": { 00:20:32.862 "state": "completed", 00:20:32.862 "digest": "sha512", 00:20:32.862 "dhgroup": "ffdhe8192" 00:20:32.862 } 00:20:32.862 } 00:20:32.862 ]' 00:20:32.862 19:16:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.122 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.381 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:20:33.950 19:16:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:33.950 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.212 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.783 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.783 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.783 { 00:20:34.783 "cntlid": 145, 00:20:34.783 "qid": 0, 00:20:34.783 "state": "enabled", 00:20:34.783 "thread": "nvmf_tgt_poll_group_000", 00:20:34.783 "listen_address": { 00:20:34.784 "trtype": "TCP", 00:20:34.784 "adrfam": "IPv4", 00:20:34.784 "traddr": "10.0.0.2", 00:20:34.784 "trsvcid": "4420" 00:20:34.784 }, 00:20:34.784 "peer_address": { 00:20:34.784 "trtype": "TCP", 00:20:34.784 "adrfam": "IPv4", 00:20:34.784 "traddr": "10.0.0.1", 00:20:34.784 "trsvcid": "46764" 00:20:34.784 }, 00:20:34.784 "auth": { 00:20:34.784 "state": "completed", 00:20:34.784 "digest": "sha512", 00:20:34.784 "dhgroup": "ffdhe8192" 00:20:34.784 } 00:20:34.784 } 00:20:34.784 ]' 00:20:34.784 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.044 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:35.044 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.044 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:35.044 19:16:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.044 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.044 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.044 19:16:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.305 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:00:ZjBmNmI2MWUyZWUyMjc2MDBkYTJjYjc5ZTI1ZTVlZGFmMTEyOTU0YmI3MGI2ZGMxfU2RoQ==: --dhchap-ctrl-secret DHHC-1:03:YTcwNGM4NDMwYTkzMzg0ZTk4MjBkNjZlNmIxOGQzODY5NjAwOGVlZjVjMTYwZmQzZDdmOWVhZGJiOTRjMmFkNkM+AW4=: 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.875 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:35.875 19:16:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:36.446 request: 00:20:36.446 { 00:20:36.446 "name": "nvme0", 00:20:36.446 "trtype": "tcp", 00:20:36.446 "traddr": "10.0.0.2", 00:20:36.446 "adrfam": "ipv4", 00:20:36.446 "trsvcid": "4420", 00:20:36.446 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:36.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:36.446 "prchk_reftag": false, 00:20:36.446 "prchk_guard": false, 00:20:36.446 "hdgst": false, 00:20:36.446 "ddgst": false, 00:20:36.446 "dhchap_key": "key2", 00:20:36.446 "method": "bdev_nvme_attach_controller", 00:20:36.446 "req_id": 1 00:20:36.446 } 00:20:36.446 Got JSON-RPC error response 00:20:36.446 response: 00:20:36.446 { 00:20:36.446 "code": -5, 00:20:36.446 "message": "Input/output error" 00:20:36.446 } 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.446 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:36.447 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:37.018 request: 00:20:37.018 { 00:20:37.018 "name": "nvme0", 00:20:37.018 "trtype": "tcp", 00:20:37.018 "traddr": "10.0.0.2", 00:20:37.018 "adrfam": "ipv4", 00:20:37.018 "trsvcid": "4420", 00:20:37.018 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:37.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.018 "prchk_reftag": false, 00:20:37.018 "prchk_guard": false, 00:20:37.018 "hdgst": false, 00:20:37.018 "ddgst": false, 00:20:37.018 "dhchap_key": "key1", 00:20:37.018 "dhchap_ctrlr_key": "ckey2", 00:20:37.018 "method": "bdev_nvme_attach_controller", 00:20:37.018 "req_id": 1 00:20:37.018 } 00:20:37.018 Got JSON-RPC error response 00:20:37.018 response: 00:20:37.018 { 00:20:37.018 "code": -5, 00:20:37.018 "message": "Input/output error" 00:20:37.018 } 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key1 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.018 19:16:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:37.588 request: 00:20:37.588 { 00:20:37.588 "name": "nvme0", 00:20:37.588 "trtype": "tcp", 00:20:37.588 "traddr": "10.0.0.2", 00:20:37.588 "adrfam": "ipv4", 00:20:37.588 "trsvcid": "4420", 00:20:37.588 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:37.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:37.588 "prchk_reftag": false, 00:20:37.588 "prchk_guard": false, 00:20:37.588 "hdgst": false, 00:20:37.588 "ddgst": false, 00:20:37.588 "dhchap_key": "key1", 00:20:37.588 "dhchap_ctrlr_key": "ckey1", 00:20:37.588 "method": "bdev_nvme_attach_controller", 00:20:37.588 "req_id": 1 00:20:37.588 } 00:20:37.588 Got JSON-RPC error response 00:20:37.588 response: 00:20:37.588 { 00:20:37.588 "code": -5, 00:20:37.588 "message": "Input/output error" 00:20:37.588 } 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1423886 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1423886 ']' 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1423886 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423886 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423886' 00:20:37.588 killing process with pid 1423886 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1423886 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1423886 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1450825 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1450825 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1450825 ']' 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:37.588 19:16:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1450825 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1450825 ']' 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:38.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
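The relaunch traced just above boils down to the following shell sequence; the binary path, namespace, and flags are copied from the log, while the wait step is a plain rpc.py polling loop standing in for the waitforlisten helper (an assumption about what that helper does, not part of the captured output):

    # Restart the nvmf target inside the test namespace with auth-layer logging,
    # holding subsystem initialization until the first framework RPC arrives.
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!

    # Poll the default RPC socket until the app answers before driving it further
    # (a stand-in for the waitforlisten helper seen in the trace).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
          -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done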
00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.530 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:38.791 19:16:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:39.362 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.362 { 00:20:39.362 
"cntlid": 1, 00:20:39.362 "qid": 0, 00:20:39.362 "state": "enabled", 00:20:39.362 "thread": "nvmf_tgt_poll_group_000", 00:20:39.362 "listen_address": { 00:20:39.362 "trtype": "TCP", 00:20:39.362 "adrfam": "IPv4", 00:20:39.362 "traddr": "10.0.0.2", 00:20:39.362 "trsvcid": "4420" 00:20:39.362 }, 00:20:39.362 "peer_address": { 00:20:39.362 "trtype": "TCP", 00:20:39.362 "adrfam": "IPv4", 00:20:39.362 "traddr": "10.0.0.1", 00:20:39.362 "trsvcid": "46844" 00:20:39.362 }, 00:20:39.362 "auth": { 00:20:39.362 "state": "completed", 00:20:39.362 "digest": "sha512", 00:20:39.362 "dhgroup": "ffdhe8192" 00:20:39.362 } 00:20:39.362 } 00:20:39.362 ]' 00:20:39.362 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.623 19:16:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid 00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-secret DHHC-1:03:MDc5NWRhNTNhMDRjMWJlNzM0OGMwNmE0YzlhOTc5MGNjY2FiYzcxNDU4NzY4NDQwYjk3NTNhYzY4MTQ4ZDI0N2lzZJQ=: 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --dhchap-key key3 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.565 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:40.825 request: 00:20:40.825 { 00:20:40.825 "name": "nvme0", 00:20:40.825 "trtype": "tcp", 00:20:40.825 "traddr": "10.0.0.2", 00:20:40.825 "adrfam": "ipv4", 00:20:40.825 "trsvcid": "4420", 00:20:40.825 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:40.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:40.825 "prchk_reftag": false, 00:20:40.825 "prchk_guard": false, 00:20:40.825 "hdgst": false, 00:20:40.825 "ddgst": false, 00:20:40.825 "dhchap_key": "key3", 00:20:40.825 "method": "bdev_nvme_attach_controller", 00:20:40.825 "req_id": 1 00:20:40.825 } 00:20:40.825 Got JSON-RPC error response 00:20:40.825 response: 00:20:40.825 { 00:20:40.825 "code": -5, 00:20:40.825 "message": "Input/output error" 00:20:40.825 } 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:40.825 19:16:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:41.086 request: 00:20:41.086 { 00:20:41.086 "name": "nvme0", 00:20:41.086 "trtype": "tcp", 00:20:41.086 "traddr": "10.0.0.2", 00:20:41.086 "adrfam": "ipv4", 00:20:41.086 "trsvcid": "4420", 00:20:41.086 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.086 "prchk_reftag": false, 00:20:41.086 "prchk_guard": false, 00:20:41.086 "hdgst": false, 00:20:41.086 "ddgst": false, 00:20:41.086 "dhchap_key": "key3", 00:20:41.086 "method": "bdev_nvme_attach_controller", 00:20:41.086 "req_id": 1 00:20:41.086 } 00:20:41.086 Got JSON-RPC error response 00:20:41.086 response: 00:20:41.086 { 00:20:41.086 "code": -5, 00:20:41.086 "message": "Input/output error" 00:20:41.086 } 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.086 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.347 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:41.608 request: 00:20:41.608 { 00:20:41.608 "name": "nvme0", 00:20:41.608 "trtype": "tcp", 00:20:41.608 "traddr": "10.0.0.2", 00:20:41.608 "adrfam": "ipv4", 00:20:41.608 "trsvcid": "4420", 00:20:41.608 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:41.608 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be", 00:20:41.608 "prchk_reftag": false, 00:20:41.608 "prchk_guard": false, 00:20:41.608 "hdgst": false, 00:20:41.608 "ddgst": false, 00:20:41.608 
"dhchap_key": "key0", 00:20:41.608 "dhchap_ctrlr_key": "key1", 00:20:41.608 "method": "bdev_nvme_attach_controller", 00:20:41.608 "req_id": 1 00:20:41.608 } 00:20:41.608 Got JSON-RPC error response 00:20:41.608 response: 00:20:41.608 { 00:20:41.608 "code": -5, 00:20:41.608 "message": "Input/output error" 00:20:41.608 } 00:20:41.608 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:41.608 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:41.608 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:41.608 19:16:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:41.608 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:41.608 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:41.608 00:20:41.869 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:41.869 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:41.869 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.869 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.869 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.869 19:16:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1424233 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1424233 ']' 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1424233 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1424233 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1424233' 00:20:42.132 killing process with pid 1424233 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1424233 00:20:42.132 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1424233 
00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:42.394 rmmod nvme_tcp 00:20:42.394 rmmod nvme_fabrics 00:20:42.394 rmmod nvme_keyring 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1450825 ']' 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1450825 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1450825 ']' 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1450825 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1450825 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1450825' 00:20:42.394 killing process with pid 1450825 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1450825 00:20:42.394 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1450825 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.654 19:16:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.569 19:16:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:44.569 19:16:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.pVo /tmp/spdk.key-sha256.VOk /tmp/spdk.key-sha384.ekk /tmp/spdk.key-sha512.Odv /tmp/spdk.key-sha512.JnI /tmp/spdk.key-sha384.9qZ /tmp/spdk.key-sha256.C2p '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:44.569 00:20:44.569 real 2m24.090s 00:20:44.569 user 5m19.764s 00:20:44.569 sys 0m21.584s 00:20:44.569 19:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:44.569 19:16:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.569 ************************************ 00:20:44.569 END TEST nvmf_auth_target 00:20:44.569 ************************************ 00:20:44.569 19:16:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:44.569 19:16:50 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:44.569 19:16:50 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:44.569 19:16:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:44.569 19:16:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:44.569 19:16:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:44.830 ************************************ 00:20:44.830 START TEST nvmf_bdevio_no_huge 00:20:44.830 ************************************ 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:44.830 * Looking for test storage... 00:20:44.830 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:44.830 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:44.831 19:16:50 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.055 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.055 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:20:53.056 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:20:53.056 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:20:53.056 Found net devices under 0000:4b:00.0: cvl_0_0 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:20:53.056 Found net devices under 0000:4b:00.1: cvl_0_1 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.056 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:53.056 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:20:53.056 00:20:53.056 --- 10.0.0.2 ping statistics --- 00:20:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.056 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.056 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:53.056 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:20:53.056 00:20:53.056 --- 10.0.0.1 ping statistics --- 00:20:53.056 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.056 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.056 19:16:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1455875 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1455875 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1455875 ']' 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:53.056 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.056 [2024-07-12 19:16:58.061511] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:20:53.056 [2024-07-12 19:16:58.061564] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:53.057 [2024-07-12 19:16:58.149893] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.057 [2024-07-12 19:16:58.257443] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:53.057 [2024-07-12 19:16:58.257497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.057 [2024-07-12 19:16:58.257505] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.057 [2024-07-12 19:16:58.257513] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.057 [2024-07-12 19:16:58.257519] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:53.057 [2024-07-12 19:16:58.257680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:53.057 [2024-07-12 19:16:58.257839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:53.057 [2024-07-12 19:16:58.257996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.057 [2024-07-12 19:16:58.257996] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 [2024-07-12 19:16:58.905114] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 Malloc0 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.057 19:16:58 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:53.057 [2024-07-12 19:16:58.958738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:53.057 { 00:20:53.057 "params": { 00:20:53.057 "name": "Nvme$subsystem", 00:20:53.057 "trtype": "$TEST_TRANSPORT", 00:20:53.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:53.057 "adrfam": "ipv4", 00:20:53.057 "trsvcid": "$NVMF_PORT", 00:20:53.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:53.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:53.057 "hdgst": ${hdgst:-false}, 00:20:53.057 "ddgst": ${ddgst:-false} 00:20:53.057 }, 00:20:53.057 "method": "bdev_nvme_attach_controller" 00:20:53.057 } 00:20:53.057 EOF 00:20:53.057 )") 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:53.057 19:16:58 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:53.057 "params": { 00:20:53.057 "name": "Nvme1", 00:20:53.057 "trtype": "tcp", 00:20:53.057 "traddr": "10.0.0.2", 00:20:53.057 "adrfam": "ipv4", 00:20:53.057 "trsvcid": "4420", 00:20:53.057 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.057 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:53.057 "hdgst": false, 00:20:53.057 "ddgst": false 00:20:53.057 }, 00:20:53.057 "method": "bdev_nvme_attach_controller" 00:20:53.057 }' 00:20:53.057 [2024-07-12 19:16:59.012963] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:20:53.057 [2024-07-12 19:16:59.013037] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1456229 ] 00:20:53.057 [2024-07-12 19:16:59.082347] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:53.057 [2024-07-12 19:16:59.179155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.057 [2024-07-12 19:16:59.179224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.057 [2024-07-12 19:16:59.179227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.322 I/O targets: 00:20:53.322 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:53.322 00:20:53.322 00:20:53.322 CUnit - A unit testing framework for C - Version 2.1-3 00:20:53.322 http://cunit.sourceforge.net/ 00:20:53.322 00:20:53.322 00:20:53.322 Suite: bdevio tests on: Nvme1n1 00:20:53.322 Test: blockdev write read block ...passed 00:20:53.322 Test: blockdev write zeroes read block ...passed 00:20:53.322 Test: blockdev write zeroes read no split ...passed 00:20:53.579 Test: blockdev write zeroes read split ...passed 00:20:53.579 Test: blockdev write zeroes read split partial ...passed 00:20:53.579 Test: blockdev reset ...[2024-07-12 19:16:59.504334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:53.579 [2024-07-12 19:16:59.504395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cccc10 (9): Bad file descriptor 00:20:53.579 [2024-07-12 19:16:59.517546] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:53.579 passed 00:20:53.579 Test: blockdev write read 8 blocks ...passed 00:20:53.579 Test: blockdev write read size > 128k ...passed 00:20:53.579 Test: blockdev write read invalid size ...passed 00:20:53.579 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:53.579 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:53.579 Test: blockdev write read max offset ...passed 00:20:53.579 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:53.579 Test: blockdev writev readv 8 blocks ...passed 00:20:53.579 Test: blockdev writev readv 30 x 1block ...passed 00:20:53.836 Test: blockdev writev readv block ...passed 00:20:53.836 Test: blockdev writev readv size > 128k ...passed 00:20:53.836 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:53.836 Test: blockdev comparev and writev ...[2024-07-12 19:16:59.741397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.741423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.741434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.741440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.741881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.741891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.741901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.741907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.742352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.742361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.742370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.742375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.742809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.742817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.742826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:53.836 [2024-07-12 19:16:59.742831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:53.836 passed 00:20:53.836 Test: blockdev nvme passthru rw ...passed 00:20:53.836 Test: blockdev nvme passthru vendor specific ...[2024-07-12 19:16:59.827835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.836 [2024-07-12 19:16:59.827851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.828132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.836 [2024-07-12 19:16:59.828143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.828454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.836 [2024-07-12 19:16:59.828462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:53.836 [2024-07-12 19:16:59.828747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.836 [2024-07-12 19:16:59.828754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:53.836 passed 00:20:53.836 Test: blockdev nvme admin passthru ...passed 00:20:53.836 Test: blockdev copy ...passed 00:20:53.836 00:20:53.836 Run Summary: Type Total Ran Passed Failed Inactive 00:20:53.836 suites 1 1 n/a 0 0 00:20:53.836 tests 23 23 23 0 0 00:20:53.836 asserts 152 152 152 0 n/a 00:20:53.836 00:20:53.836 Elapsed time = 1.145 seconds 00:20:54.092 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.092 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.092 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:54.093 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:54.093 rmmod nvme_tcp 00:20:54.093 rmmod nvme_fabrics 00:20:54.093 rmmod nvme_keyring 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1455875 ']' 00:20:54.351 19:17:00 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1455875 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1455875 ']' 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1455875 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1455875 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1455875' 00:20:54.351 killing process with pid 1455875 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1455875 00:20:54.351 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1455875 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.611 19:17:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.521 19:17:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:56.521 00:20:56.521 real 0m11.867s 00:20:56.521 user 0m12.826s 00:20:56.521 sys 0m6.279s 00:20:56.521 19:17:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:56.521 19:17:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:56.521 ************************************ 00:20:56.521 END TEST nvmf_bdevio_no_huge 00:20:56.521 ************************************ 00:20:56.521 19:17:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:56.521 19:17:02 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:56.521 19:17:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:56.521 19:17:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:56.521 19:17:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:56.781 ************************************ 00:20:56.781 START TEST nvmf_tls 00:20:56.781 ************************************ 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:56.781 * Looking for test storage... 
00:20:56.781 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.781 19:17:02 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:56.782 19:17:02 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:21:04.924 
19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.924 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:21:04.925 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:21:04.925 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:21:04.925 Found net devices under 0000:4b:00.0: cvl_0_0 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:21:04.925 Found net devices under 0000:4b:00.1: cvl_0_1 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:04.925 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.925 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.459 ms 00:21:04.925 00:21:04.925 --- 10.0.0.2 ping statistics --- 00:21:04.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.925 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:04.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:21:04.925 00:21:04.925 --- 10.0.0.1 ping statistics --- 00:21:04.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.925 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1460555 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1460555 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1460555 ']' 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:04.925 19:17:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.925 [2024-07-12 19:17:09.966624] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:21:04.925 [2024-07-12 19:17:09.966674] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.925 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.925 [2024-07-12 19:17:10.051958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.925 [2024-07-12 19:17:10.125354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.925 [2024-07-12 19:17:10.125399] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:04.925 [2024-07-12 19:17:10.125406] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.925 [2024-07-12 19:17:10.125413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.925 [2024-07-12 19:17:10.125419] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.925 [2024-07-12 19:17:10.125443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:04.925 true 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:04.925 19:17:10 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:21:05.187 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:21:05.187 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:21:05.187 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:05.187 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:05.187 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:21:05.448 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:21:05.448 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:21:05.448 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:05.709 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:05.709 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:21:05.709 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:21:05.709 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:21:05.709 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:05.709 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:21:05.970 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:21:05.970 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:21:05.970 19:17:11 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:06.231 19:17:12 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:06.231 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:21:06.231 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:21:06.231 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:21:06.231 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:06.492 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:06.492 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:21:06.492 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:21:06.492 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:21:06.753 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:06.753 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.lYZPUnMQQ5 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.Z3iTg90ui5 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.lYZPUnMQQ5 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Z3iTg90ui5 00:21:06.754 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:21:07.016 19:17:12 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:07.016 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.lYZPUnMQQ5 00:21:07.016 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.lYZPUnMQQ5 00:21:07.016 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.277 [2024-07-12 19:17:13.257452] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.277 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:07.538 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:07.538 [2024-07-12 19:17:13.550152] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:07.538 [2024-07-12 19:17:13.550353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.538 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:07.798 malloc0 00:21:07.798 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:07.798 19:17:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lYZPUnMQQ5 00:21:08.059 [2024-07-12 19:17:14.005317] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:08.059 19:17:14 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.lYZPUnMQQ5 00:21:08.059 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.056 Initializing NVMe Controllers 00:21:18.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:18.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:18.056 Initialization complete. Launching workers. 
00:21:18.056 ======================================================== 00:21:18.056 Latency(us) 00:21:18.056 Device Information : IOPS MiB/s Average min max 00:21:18.056 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19109.08 74.64 3349.22 961.80 5730.44 00:21:18.056 ======================================================== 00:21:18.057 Total : 19109.08 74.64 3349.22 961.80 5730.44 00:21:18.057 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYZPUnMQQ5 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lYZPUnMQQ5' 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1463293 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1463293 /var/tmp/bdevperf.sock 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1463293 ']' 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.057 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.057 [2024-07-12 19:17:24.166996] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:18.057 [2024-07-12 19:17:24.167052] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1463293 ] 00:21:18.317 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.317 [2024-07-12 19:17:24.215626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.317 [2024-07-12 19:17:24.268059] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.888 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.888 19:17:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.888 19:17:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lYZPUnMQQ5 00:21:19.148 [2024-07-12 19:17:25.060846] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.148 [2024-07-12 19:17:25.060903] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:19.148 TLSTESTn1 00:21:19.148 19:17:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:19.148 Running I/O for 10 seconds... 00:21:31.374 00:21:31.374 Latency(us) 00:21:31.374 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.374 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:31.374 Verification LBA range: start 0x0 length 0x2000 00:21:31.374 TLSTESTn1 : 10.06 3272.40 12.78 0.00 0.00 38995.20 6171.31 110100.48 00:21:31.374 =================================================================================================================== 00:21:31.374 Total : 3272.40 12.78 0.00 0.00 38995.20 6171.31 110100.48 00:21:31.374 0 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1463293 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1463293 ']' 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1463293 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1463293 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1463293' 00:21:31.374 killing process with pid 1463293 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1463293 00:21:31.374 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.374 00:21:31.374 Latency(us) 00:21:31.374 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:31.374 =================================================================================================================== 00:21:31.374 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:31.374 [2024-07-12 19:17:35.392430] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1463293 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z3iTg90ui5 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z3iTg90ui5 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Z3iTg90ui5 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Z3iTg90ui5' 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465611 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465611 /var/tmp/bdevperf.sock 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465611 ']' 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.374 19:17:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.374 [2024-07-12 19:17:35.555164] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:31.374 [2024-07-12 19:17:35.555218] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465611 ] 00:21:31.374 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.374 [2024-07-12 19:17:35.605541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.374 [2024-07-12 19:17:35.657162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.374 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.374 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:31.374 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Z3iTg90ui5 00:21:31.374 [2024-07-12 19:17:36.470475] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.374 [2024-07-12 19:17:36.470531] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.375 [2024-07-12 19:17:36.480567] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:31.375 [2024-07-12 19:17:36.480640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147fec0 (107): Transport endpoint is not connected 00:21:31.375 [2024-07-12 19:17:36.481608] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x147fec0 (9): Bad file descriptor 00:21:31.375 [2024-07-12 19:17:36.482612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:31.375 [2024-07-12 19:17:36.482619] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:31.375 [2024-07-12 19:17:36.482626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
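This attach fails because /tmp/tmp.Z3iTg90ui5 holds the second key, which was never registered against nqn.2016-06.io.spdk:cnode1, so the target drops the TLS handshake and the initiator reports "Transport endpoint is not connected"; the JSON-RPC dump for the attempt follows below. Both key files were written earlier from format_interchange_psk output (NVMeTLSkey-1:01:...:). A hedged Python sketch of that interchange layout, assuming the trailer appended before base64 encoding is a little-endian CRC-32 of the key bytes (inferred from the values printed earlier in this log, not taken from the helper itself):

import base64
import zlib

def format_interchange_psk(key: str, hash_id: int) -> str:
    # NVMeTLSkey-1:<two-digit hash indicator>:<base64(key bytes + CRC-32 trailer)>:
    raw = key.encode()
    crc = zlib.crc32(raw).to_bytes(4, "little")   # assumed byte order
    b64 = base64.b64encode(raw + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02d}:{b64}:"

print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))
# If the CRC assumption holds, these reproduce the key and key_2 values
# printed by target/tls.sh earlier in this run.

The 0600 mode set on the temp key files also matters: the final case in this section shows bdev_nvme rejecting a key file after it is re-chmodded to 0666.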
00:21:31.375 request: 00:21:31.375 { 00:21:31.375 "name": "TLSTEST", 00:21:31.375 "trtype": "tcp", 00:21:31.375 "traddr": "10.0.0.2", 00:21:31.375 "adrfam": "ipv4", 00:21:31.375 "trsvcid": "4420", 00:21:31.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:31.375 "prchk_reftag": false, 00:21:31.375 "prchk_guard": false, 00:21:31.375 "hdgst": false, 00:21:31.375 "ddgst": false, 00:21:31.375 "psk": "/tmp/tmp.Z3iTg90ui5", 00:21:31.375 "method": "bdev_nvme_attach_controller", 00:21:31.375 "req_id": 1 00:21:31.375 } 00:21:31.375 Got JSON-RPC error response 00:21:31.375 response: 00:21:31.375 { 00:21:31.375 "code": -5, 00:21:31.375 "message": "Input/output error" 00:21:31.375 } 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465611 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465611 ']' 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465611 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465611 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465611' 00:21:31.375 killing process with pid 1465611 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465611 00:21:31.375 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.375 00:21:31.375 Latency(us) 00:21:31.375 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.375 =================================================================================================================== 00:21:31.375 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:31.375 [2024-07-12 19:17:36.569555] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465611 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lYZPUnMQQ5 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lYZPUnMQQ5 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.lYZPUnMQQ5 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lYZPUnMQQ5' 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465680 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465680 /var/tmp/bdevperf.sock 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465680 ']' 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.375 19:17:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.375 [2024-07-12 19:17:36.738496] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:31.375 [2024-07-12 19:17:36.738562] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465680 ] 00:21:31.375 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.375 [2024-07-12 19:17:36.788950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.375 [2024-07-12 19:17:36.841219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.375 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:31.375 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:31.375 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.lYZPUnMQQ5 00:21:31.636 [2024-07-12 19:17:37.642010] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:31.636 [2024-07-12 19:17:37.642075] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:31.636 [2024-07-12 19:17:37.653337] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:31.636 [2024-07-12 19:17:37.653357] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:31.636 [2024-07-12 19:17:37.653376] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:31.636 [2024-07-12 19:17:37.654344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223fec0 (107): Transport endpoint is not connected 00:21:31.636 [2024-07-12 19:17:37.655337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223fec0 (9): Bad file descriptor 00:21:31.636 [2024-07-12 19:17:37.656339] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:31.636 [2024-07-12 19:17:37.656345] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:31.636 [2024-07-12 19:17:37.656353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
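The tcp.c and posix.c *ERROR* lines above show how the target resolves a TLS PSK: it builds an identity string of the form "NVMe0R01 <hostnqn> <subnqn>" and looks it up among the hosts registered with nvmf_subsystem_add_host; host2 was never registered, so the lookup fails and the handshake is aborted. A small Python sketch of that lookup, with the identity format inferred from the error messages in this log (the callback named in the comment is only the one visible above); the JSON-RPC dump for this attempt follows:

def psk_identity(hostnqn, subnqn, hash_id="01"):
    # Identity layout inferred from "Could not find PSK for identity: NVMe0R01 ..."
    return f"NVMe0R{hash_id} {hostnqn} {subnqn}"

# Only host1/cnode1 was registered with a PSK in this run.
registered = {
    psk_identity("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1"):
        "/tmp/tmp.lYZPUnMQQ5",
}

def find_psk(hostnqn, subnqn):
    identity = psk_identity(hostnqn, subnqn)
    psk = registered.get(identity)
    if psk is None:
        # Mirrors the failure path of posix_sock_psk_find_session_server_cb seen
        # above: an unknown identity aborts the handshake, and the initiator later
        # sees "Transport endpoint is not connected".
        raise LookupError(f"Unable to find PSK for identity: {identity}")
    return psk

for host in ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:host2"):
    try:
        print(host, "->", find_psk(host, "nqn.2016-06.io.spdk:cnode1"))
    except LookupError as err:
        print(err)   # host2 fails exactly like this case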
00:21:31.636 request: 00:21:31.636 { 00:21:31.636 "name": "TLSTEST", 00:21:31.636 "trtype": "tcp", 00:21:31.636 "traddr": "10.0.0.2", 00:21:31.636 "adrfam": "ipv4", 00:21:31.636 "trsvcid": "4420", 00:21:31.636 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:31.636 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:31.636 "prchk_reftag": false, 00:21:31.636 "prchk_guard": false, 00:21:31.636 "hdgst": false, 00:21:31.636 "ddgst": false, 00:21:31.636 "psk": "/tmp/tmp.lYZPUnMQQ5", 00:21:31.636 "method": "bdev_nvme_attach_controller", 00:21:31.636 "req_id": 1 00:21:31.636 } 00:21:31.636 Got JSON-RPC error response 00:21:31.636 response: 00:21:31.636 { 00:21:31.636 "code": -5, 00:21:31.636 "message": "Input/output error" 00:21:31.636 } 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465680 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465680 ']' 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465680 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465680 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465680' 00:21:31.636 killing process with pid 1465680 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465680 00:21:31.636 Received shutdown signal, test time was about 10.000000 seconds 00:21:31.636 00:21:31.636 Latency(us) 00:21:31.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.636 =================================================================================================================== 00:21:31.636 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:31.636 [2024-07-12 19:17:37.742997] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:31.636 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465680 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYZPUnMQQ5 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYZPUnMQQ5 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.lYZPUnMQQ5 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.lYZPUnMQQ5' 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1465984 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1465984 /var/tmp/bdevperf.sock 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1465984 ']' 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:31.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:31.897 19:17:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.897 [2024-07-12 19:17:37.897172] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:31.897 [2024-07-12 19:17:37.897230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1465984 ] 00:21:31.897 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.897 [2024-07-12 19:17:37.946092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.897 [2024-07-12 19:17:37.997460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:32.840 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:32.840 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:32.840 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.lYZPUnMQQ5 00:21:32.840 [2024-07-12 19:17:38.818076] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.840 [2024-07-12 19:17:38.818145] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:32.840 [2024-07-12 19:17:38.826194] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:32.840 [2024-07-12 19:17:38.826216] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:32.840 [2024-07-12 19:17:38.826235] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:32.840 [2024-07-12 19:17:38.827091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227cec0 (107): Transport endpoint is not connected 00:21:32.840 [2024-07-12 19:17:38.828084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x227cec0 (9): Bad file descriptor 00:21:32.840 [2024-07-12 19:17:38.829087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:32.841 [2024-07-12 19:17:38.829094] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:32.841 [2024-07-12 19:17:38.829102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
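The request/response block that follows is the JSON-RPC exchange rpc.py drives over /var/tmp/bdevperf.sock for this failing attach. A minimal sketch of such a client, assuming the server speaks JSON-RPC 2.0 over a UNIX-domain socket and answers each request with a single JSON object (the parameters are a subset of those in the dump below; error handling trimmed):

import json
import socket

def rpc_call(sock_path: str, method: str, params: dict) -> dict:
    """One JSON-RPC 2.0 request/response over a UNIX-domain socket."""
    request = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(request).encode())
        buf = b""
        while True:
            chunk = sock.recv(4096)
            buf += chunk
            try:
                return json.loads(buf)      # full reply assembled
            except json.JSONDecodeError:
                if not chunk:               # connection closed before a full reply
                    raise

reply = rpc_call("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode2",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "/tmp/tmp.lYZPUnMQQ5",
})
print(reply)   # for this subsystem mismatch the reply carries code -5, "Input/output error"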
00:21:32.841 request: 00:21:32.841 { 00:21:32.841 "name": "TLSTEST", 00:21:32.841 "trtype": "tcp", 00:21:32.841 "traddr": "10.0.0.2", 00:21:32.841 "adrfam": "ipv4", 00:21:32.841 "trsvcid": "4420", 00:21:32.841 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:32.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:32.841 "prchk_reftag": false, 00:21:32.841 "prchk_guard": false, 00:21:32.841 "hdgst": false, 00:21:32.841 "ddgst": false, 00:21:32.841 "psk": "/tmp/tmp.lYZPUnMQQ5", 00:21:32.841 "method": "bdev_nvme_attach_controller", 00:21:32.841 "req_id": 1 00:21:32.841 } 00:21:32.841 Got JSON-RPC error response 00:21:32.841 response: 00:21:32.841 { 00:21:32.841 "code": -5, 00:21:32.841 "message": "Input/output error" 00:21:32.841 } 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1465984 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1465984 ']' 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1465984 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1465984 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1465984' 00:21:32.841 killing process with pid 1465984 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1465984 00:21:32.841 Received shutdown signal, test time was about 10.000000 seconds 00:21:32.841 00:21:32.841 Latency(us) 00:21:32.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:32.841 =================================================================================================================== 00:21:32.841 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:32.841 [2024-07-12 19:17:38.891069] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:32.841 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1465984 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1466326 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1466326 /var/tmp/bdevperf.sock 00:21:33.178 19:17:38 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:33.178 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1466326 ']' 00:21:33.178 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:33.178 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:33.178 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:33.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:33.178 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:33.178 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.178 [2024-07-12 19:17:39.048183] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:33.178 [2024-07-12 19:17:39.048238] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466326 ] 00:21:33.178 EAL: No free 2048 kB hugepages reported on node 1 00:21:33.178 [2024-07-12 19:17:39.098263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.178 [2024-07-12 19:17:39.149527] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.749 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:33.749 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:33.749 19:17:39 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:34.010 [2024-07-12 19:17:39.969009] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:34.010 [2024-07-12 19:17:39.970887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ee4a0 (9): Bad file descriptor 00:21:34.010 [2024-07-12 19:17:39.971886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:34.010 [2024-07-12 19:17:39.971894] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:34.010 [2024-07-12 19:17:39.971901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
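Each of these negative cases is wrapped in the NOT helper visible in the xtrace (the valid_exec_arg and es=1 bookkeeping): the attach is expected to fail, and the test would only break if it succeeded. A tiny Python sketch of the same inversion, using only the standard library; the 'false' command is an illustrative stand-in for the failing attach. The request/response dump for this no-PSK attempt follows.

import subprocess

def expect_failure(cmd) -> int:
    """Succeed only when the wrapped command exits non-zero,
    mirroring the NOT wrapper around run_bdevperf in this log."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode == 0:
        raise AssertionError(f"{cmd!r} unexpectedly succeeded")
    return proc.returncode

print("observed exit status:", expect_failure(["false"]))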
00:21:34.010 request: 00:21:34.010 { 00:21:34.010 "name": "TLSTEST", 00:21:34.010 "trtype": "tcp", 00:21:34.010 "traddr": "10.0.0.2", 00:21:34.010 "adrfam": "ipv4", 00:21:34.010 "trsvcid": "4420", 00:21:34.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.010 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.010 "prchk_reftag": false, 00:21:34.010 "prchk_guard": false, 00:21:34.010 "hdgst": false, 00:21:34.010 "ddgst": false, 00:21:34.010 "method": "bdev_nvme_attach_controller", 00:21:34.010 "req_id": 1 00:21:34.010 } 00:21:34.010 Got JSON-RPC error response 00:21:34.010 response: 00:21:34.010 { 00:21:34.010 "code": -5, 00:21:34.010 "message": "Input/output error" 00:21:34.010 } 00:21:34.010 19:17:39 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1466326 00:21:34.010 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1466326 ']' 00:21:34.010 19:17:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1466326 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1466326 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1466326' 00:21:34.010 killing process with pid 1466326 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1466326 00:21:34.010 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.010 00:21:34.010 Latency(us) 00:21:34.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.010 =================================================================================================================== 00:21:34.010 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:34.010 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1466326 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1460555 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1460555 ']' 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1460555 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1460555 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1460555' 00:21:34.271 
killing process with pid 1460555 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1460555 00:21:34.271 [2024-07-12 19:17:40.221682] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1460555 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.9cDEgfiRJN 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.9cDEgfiRJN 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.271 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1466533 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1466533 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1466533 ']' 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.532 19:17:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.532 [2024-07-12 19:17:40.456099] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:34.532 [2024-07-12 19:17:40.456165] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.532 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.532 [2024-07-12 19:17:40.538698] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.532 [2024-07-12 19:17:40.595397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.532 [2024-07-12 19:17:40.595428] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.532 [2024-07-12 19:17:40.595434] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.532 [2024-07-12 19:17:40.595439] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.532 [2024-07-12 19:17:40.595443] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:34.532 [2024-07-12 19:17:40.595459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.103 19:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.103 19:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:35.103 19:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.103 19:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.103 19:17:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.364 19:17:41 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.364 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.9cDEgfiRJN 00:21:35.364 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9cDEgfiRJN 00:21:35.364 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:35.364 [2024-07-12 19:17:41.401534] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.364 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:35.624 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:35.624 [2024-07-12 19:17:41.698253] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:35.624 [2024-07-12 19:17:41.698442] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.624 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:35.885 malloc0 00:21:35.885 19:17:41 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:35.885 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.9cDEgfiRJN 00:21:36.146 [2024-07-12 19:17:42.149335] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9cDEgfiRJN 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9cDEgfiRJN' 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1466895 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1466895 /var/tmp/bdevperf.sock 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1466895 ']' 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:36.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:36.146 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.146 [2024-07-12 19:17:42.195208] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:36.146 [2024-07-12 19:17:42.195258] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1466895 ] 00:21:36.146 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.146 [2024-07-12 19:17:42.244400] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.406 [2024-07-12 19:17:42.296633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:36.407 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.407 19:17:42 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.407 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9cDEgfiRJN 00:21:36.407 [2024-07-12 19:17:42.508017] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.407 [2024-07-12 19:17:42.508075] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:36.668 TLSTESTn1 00:21:36.668 19:17:42 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:36.668 Running I/O for 10 seconds... 00:21:46.668 00:21:46.668 Latency(us) 00:21:46.668 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:46.668 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:46.668 Verification LBA range: start 0x0 length 0x2000 00:21:46.668 TLSTESTn1 : 10.02 3299.00 12.89 0.00 0.00 38742.01 4696.75 115343.36 00:21:46.668 =================================================================================================================== 00:21:46.668 Total : 3299.00 12.89 0.00 0.00 38742.01 4696.75 115343.36 00:21:46.668 0 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1466895 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1466895 ']' 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1466895 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.668 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1466895 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1466895' 00:21:46.935 killing process with pid 1466895 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1466895 00:21:46.935 Received shutdown signal, test time was about 10.000000 seconds 00:21:46.935 00:21:46.935 Latency(us) 00:21:46.935 Device Information : runtime(s) IOPS MiB/s Fail/s 
TO/s Average min max 00:21:46.935 =================================================================================================================== 00:21:46.935 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:46.935 [2024-07-12 19:17:52.817527] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1466895 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.9cDEgfiRJN 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9cDEgfiRJN 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9cDEgfiRJN 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9cDEgfiRJN 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.9cDEgfiRJN' 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1469053 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1469053 /var/tmp/bdevperf.sock 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1469053 ']' 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:46.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:46.935 19:17:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.935 [2024-07-12 19:17:52.987618] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
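The chmod 0666 at target/tls.sh@170 deliberately loosens the key file, and the second run_bdevperf is wrapped in NOT because the attach is now expected to fail: the "Incorrect permissions for PSK file" and "Operation not permitted" errors a little further down are the point of the test. A hedged sketch of the kind of owner-only check a caller could apply before handing a key to --psk (the stat guard is illustrative and not part of tls.sh):

  # Refuse PSK files that are readable or writable beyond the owner (illustrative guard, not from the test).
  key=/tmp/tmp.9cDEgfiRJN
  mode=$(stat -c '%a' "$key")
  case "$mode" in
      600|400) : ;;   # owner-only permissions are acceptable
      *) echo "refusing $key: mode $mode is too permissive" >&2; exit 1 ;;
  esac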
00:21:46.935 [2024-07-12 19:17:52.987674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469053 ] 00:21:46.935 EAL: No free 2048 kB hugepages reported on node 1 00:21:46.935 [2024-07-12 19:17:53.036158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.195 [2024-07-12 19:17:53.087747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:47.766 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:47.766 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:47.766 19:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9cDEgfiRJN 00:21:48.027 [2024-07-12 19:17:53.900746] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:48.027 [2024-07-12 19:17:53.900780] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:48.027 [2024-07-12 19:17:53.900786] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.9cDEgfiRJN 00:21:48.027 request: 00:21:48.027 { 00:21:48.027 "name": "TLSTEST", 00:21:48.027 "trtype": "tcp", 00:21:48.027 "traddr": "10.0.0.2", 00:21:48.027 "adrfam": "ipv4", 00:21:48.027 "trsvcid": "4420", 00:21:48.027 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:48.027 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:48.027 "prchk_reftag": false, 00:21:48.027 "prchk_guard": false, 00:21:48.027 "hdgst": false, 00:21:48.027 "ddgst": false, 00:21:48.027 "psk": "/tmp/tmp.9cDEgfiRJN", 00:21:48.027 "method": "bdev_nvme_attach_controller", 00:21:48.027 "req_id": 1 00:21:48.027 } 00:21:48.027 Got JSON-RPC error response 00:21:48.027 response: 00:21:48.027 { 00:21:48.027 "code": -1, 00:21:48.027 "message": "Operation not permitted" 00:21:48.027 } 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1469053 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1469053 ']' 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1469053 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469053 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469053' 00:21:48.027 killing process with pid 1469053 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1469053 00:21:48.027 Received shutdown signal, test time was about 10.000000 seconds 00:21:48.027 00:21:48.027 Latency(us) 00:21:48.027 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:48.027 
=================================================================================================================== 00:21:48.027 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:48.027 19:17:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1469053 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1466533 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1466533 ']' 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1466533 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1466533 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1466533' 00:21:48.027 killing process with pid 1466533 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1466533 00:21:48.027 [2024-07-12 19:17:54.149518] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:48.027 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1466533 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1469230 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1469230 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1469230 ']' 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
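The bookkeeping just above (es=1, the (( !es == 0 )) check) is how the NOT wrapper from common/autotest_common.sh turns the expected attach failure into a passing step. A much-reduced stand-in for that pattern, assuming none of the real helper's argument validation or signal handling:

  # Simplified expect-failure wrapper; the real NOT in autotest_common.sh does more
  # bookkeeping, as the xtrace above shows.
  NOT() {
      if "$@"; then
          return 1   # the wrapped command unexpectedly succeeded
      fi
      return 0       # failure was expected
  }

  NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.9cDEgfiRJN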
00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:48.288 19:17:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:48.288 [2024-07-12 19:17:54.329966] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:21:48.288 [2024-07-12 19:17:54.330021] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.288 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.288 [2024-07-12 19:17:54.412731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.549 [2024-07-12 19:17:54.466974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.549 [2024-07-12 19:17:54.467006] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.549 [2024-07-12 19:17:54.467015] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.549 [2024-07-12 19:17:54.467019] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.549 [2024-07-12 19:17:54.467023] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.549 [2024-07-12 19:17:54.467043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.120 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:49.120 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:49.120 19:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.120 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.9cDEgfiRJN 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.9cDEgfiRJN 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.9cDEgfiRJN 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9cDEgfiRJN 00:21:49.121 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.381 [2024-07-12 19:17:55.264932] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.381 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:49.381 
19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:49.642 [2024-07-12 19:17:55.573684] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.642 [2024-07-12 19:17:55.573889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.642 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:49.642 malloc0 00:21:49.642 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:49.902 19:17:55 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9cDEgfiRJN 00:21:50.162 [2024-07-12 19:17:56.036774] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:50.162 [2024-07-12 19:17:56.036793] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:50.162 [2024-07-12 19:17:56.036814] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:50.162 request: 00:21:50.162 { 00:21:50.162 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:50.162 "host": "nqn.2016-06.io.spdk:host1", 00:21:50.162 "psk": "/tmp/tmp.9cDEgfiRJN", 00:21:50.162 "method": "nvmf_subsystem_add_host", 00:21:50.162 "req_id": 1 00:21:50.162 } 00:21:50.162 Got JSON-RPC error response 00:21:50.162 response: 00:21:50.162 { 00:21:50.162 "code": -32603, 00:21:50.162 "message": "Internal error" 00:21:50.162 } 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1469230 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1469230 ']' 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1469230 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469230 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469230' 00:21:50.162 killing process with pid 1469230 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1469230 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1469230 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.9cDEgfiRJN 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:50.162 
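Taken together with the two rpc.py calls just before it, this is the target-side TLS bring-up performed by setup_nvmf_tgt: transport, subsystem, a listener created with -k to request TLS, a malloc namespace, and finally the host entry that carries the PSK. The add_host step above is the one that fails with -32603 while the key is still world-readable, which is why the script tightens it back to 0600 and starts the target over. Condensed from the trace (rpc.py paths shortened):

  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener
  rpc.py bdev_malloc_create 32 4096 -b malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  chmod 0600 /tmp/tmp.9cDEgfiRJN    # add_host rejects a key readable by group/others
  rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9cDEgfiRJN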
19:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1469702 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1469702 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1469702 ']' 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:50.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:50.162 19:17:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.162 [2024-07-12 19:17:56.288434] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:21:50.162 [2024-07-12 19:17:56.288487] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.422 EAL: No free 2048 kB hugepages reported on node 1 00:21:50.422 [2024-07-12 19:17:56.367673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.422 [2024-07-12 19:17:56.420571] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.422 [2024-07-12 19:17:56.420603] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.422 [2024-07-12 19:17:56.420608] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.422 [2024-07-12 19:17:56.420613] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.422 [2024-07-12 19:17:56.420617] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:50.422 [2024-07-12 19:17:56.420631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.9cDEgfiRJN 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9cDEgfiRJN 00:21:50.994 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:51.255 [2024-07-12 19:17:57.210338] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.255 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:51.255 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:51.516 [2024-07-12 19:17:57.519087] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:51.516 [2024-07-12 19:17:57.519298] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:51.516 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:51.776 malloc0 00:21:51.776 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:51.776 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9cDEgfiRJN 00:21:52.037 [2024-07-12 19:17:57.970177] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1470072 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1470072 /var/tmp/bdevperf.sock 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1470072 ']' 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:52.037 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.037 19:17:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.037 [2024-07-12 19:17:58.042569] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:21:52.037 [2024-07-12 19:17:58.042621] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470072 ] 00:21:52.037 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.037 [2024-07-12 19:17:58.091219] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.037 [2024-07-12 19:17:58.143212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:52.978 19:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.978 19:17:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:52.979 19:17:58 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.9cDEgfiRJN 00:21:52.979 [2024-07-12 19:17:58.943861] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:52.979 [2024-07-12 19:17:58.943920] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:52.979 TLSTESTn1 00:21:52.979 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:53.240 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:53.240 "subsystems": [ 00:21:53.240 { 00:21:53.240 "subsystem": "keyring", 00:21:53.240 "config": [] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "iobuf", 00:21:53.240 "config": [ 00:21:53.240 { 00:21:53.240 "method": "iobuf_set_options", 00:21:53.240 "params": { 00:21:53.240 "small_pool_count": 8192, 00:21:53.240 "large_pool_count": 1024, 00:21:53.240 "small_bufsize": 8192, 00:21:53.240 "large_bufsize": 135168 00:21:53.240 } 00:21:53.240 } 00:21:53.240 ] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "sock", 00:21:53.240 "config": [ 00:21:53.240 { 00:21:53.240 "method": "sock_set_default_impl", 00:21:53.240 "params": { 00:21:53.240 "impl_name": "posix" 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "sock_impl_set_options", 00:21:53.240 "params": { 00:21:53.240 "impl_name": "ssl", 00:21:53.240 "recv_buf_size": 4096, 00:21:53.240 "send_buf_size": 4096, 00:21:53.240 "enable_recv_pipe": true, 00:21:53.240 "enable_quickack": false, 00:21:53.240 "enable_placement_id": 0, 00:21:53.240 "enable_zerocopy_send_server": true, 00:21:53.240 "enable_zerocopy_send_client": false, 00:21:53.240 "zerocopy_threshold": 0, 00:21:53.240 "tls_version": 0, 00:21:53.240 "enable_ktls": false 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "sock_impl_set_options", 00:21:53.240 "params": { 00:21:53.240 "impl_name": "posix", 00:21:53.240 "recv_buf_size": 2097152, 00:21:53.240 
"send_buf_size": 2097152, 00:21:53.240 "enable_recv_pipe": true, 00:21:53.240 "enable_quickack": false, 00:21:53.240 "enable_placement_id": 0, 00:21:53.240 "enable_zerocopy_send_server": true, 00:21:53.240 "enable_zerocopy_send_client": false, 00:21:53.240 "zerocopy_threshold": 0, 00:21:53.240 "tls_version": 0, 00:21:53.240 "enable_ktls": false 00:21:53.240 } 00:21:53.240 } 00:21:53.240 ] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "vmd", 00:21:53.240 "config": [] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "accel", 00:21:53.240 "config": [ 00:21:53.240 { 00:21:53.240 "method": "accel_set_options", 00:21:53.240 "params": { 00:21:53.240 "small_cache_size": 128, 00:21:53.240 "large_cache_size": 16, 00:21:53.240 "task_count": 2048, 00:21:53.240 "sequence_count": 2048, 00:21:53.240 "buf_count": 2048 00:21:53.240 } 00:21:53.240 } 00:21:53.240 ] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "bdev", 00:21:53.240 "config": [ 00:21:53.240 { 00:21:53.240 "method": "bdev_set_options", 00:21:53.240 "params": { 00:21:53.240 "bdev_io_pool_size": 65535, 00:21:53.240 "bdev_io_cache_size": 256, 00:21:53.240 "bdev_auto_examine": true, 00:21:53.240 "iobuf_small_cache_size": 128, 00:21:53.240 "iobuf_large_cache_size": 16 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "bdev_raid_set_options", 00:21:53.240 "params": { 00:21:53.240 "process_window_size_kb": 1024 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "bdev_iscsi_set_options", 00:21:53.240 "params": { 00:21:53.240 "timeout_sec": 30 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "bdev_nvme_set_options", 00:21:53.240 "params": { 00:21:53.240 "action_on_timeout": "none", 00:21:53.240 "timeout_us": 0, 00:21:53.240 "timeout_admin_us": 0, 00:21:53.240 "keep_alive_timeout_ms": 10000, 00:21:53.240 "arbitration_burst": 0, 00:21:53.240 "low_priority_weight": 0, 00:21:53.240 "medium_priority_weight": 0, 00:21:53.240 "high_priority_weight": 0, 00:21:53.240 "nvme_adminq_poll_period_us": 10000, 00:21:53.240 "nvme_ioq_poll_period_us": 0, 00:21:53.240 "io_queue_requests": 0, 00:21:53.240 "delay_cmd_submit": true, 00:21:53.240 "transport_retry_count": 4, 00:21:53.240 "bdev_retry_count": 3, 00:21:53.240 "transport_ack_timeout": 0, 00:21:53.240 "ctrlr_loss_timeout_sec": 0, 00:21:53.240 "reconnect_delay_sec": 0, 00:21:53.240 "fast_io_fail_timeout_sec": 0, 00:21:53.240 "disable_auto_failback": false, 00:21:53.240 "generate_uuids": false, 00:21:53.240 "transport_tos": 0, 00:21:53.240 "nvme_error_stat": false, 00:21:53.240 "rdma_srq_size": 0, 00:21:53.240 "io_path_stat": false, 00:21:53.240 "allow_accel_sequence": false, 00:21:53.240 "rdma_max_cq_size": 0, 00:21:53.240 "rdma_cm_event_timeout_ms": 0, 00:21:53.240 "dhchap_digests": [ 00:21:53.240 "sha256", 00:21:53.240 "sha384", 00:21:53.240 "sha512" 00:21:53.240 ], 00:21:53.240 "dhchap_dhgroups": [ 00:21:53.240 "null", 00:21:53.240 "ffdhe2048", 00:21:53.240 "ffdhe3072", 00:21:53.240 "ffdhe4096", 00:21:53.240 "ffdhe6144", 00:21:53.240 "ffdhe8192" 00:21:53.240 ] 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "bdev_nvme_set_hotplug", 00:21:53.240 "params": { 00:21:53.240 "period_us": 100000, 00:21:53.240 "enable": false 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "bdev_malloc_create", 00:21:53.240 "params": { 00:21:53.240 "name": "malloc0", 00:21:53.240 "num_blocks": 8192, 00:21:53.240 "block_size": 4096, 00:21:53.240 "physical_block_size": 4096, 00:21:53.240 "uuid": 
"4375a17c-f4b0-4bc2-a4ca-c88beeabaf11", 00:21:53.240 "optimal_io_boundary": 0 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "bdev_wait_for_examine" 00:21:53.240 } 00:21:53.240 ] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "nbd", 00:21:53.240 "config": [] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "scheduler", 00:21:53.240 "config": [ 00:21:53.240 { 00:21:53.240 "method": "framework_set_scheduler", 00:21:53.240 "params": { 00:21:53.240 "name": "static" 00:21:53.240 } 00:21:53.240 } 00:21:53.240 ] 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "subsystem": "nvmf", 00:21:53.240 "config": [ 00:21:53.240 { 00:21:53.240 "method": "nvmf_set_config", 00:21:53.240 "params": { 00:21:53.240 "discovery_filter": "match_any", 00:21:53.240 "admin_cmd_passthru": { 00:21:53.240 "identify_ctrlr": false 00:21:53.240 } 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "nvmf_set_max_subsystems", 00:21:53.240 "params": { 00:21:53.240 "max_subsystems": 1024 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "nvmf_set_crdt", 00:21:53.240 "params": { 00:21:53.240 "crdt1": 0, 00:21:53.240 "crdt2": 0, 00:21:53.240 "crdt3": 0 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "nvmf_create_transport", 00:21:53.240 "params": { 00:21:53.240 "trtype": "TCP", 00:21:53.240 "max_queue_depth": 128, 00:21:53.240 "max_io_qpairs_per_ctrlr": 127, 00:21:53.240 "in_capsule_data_size": 4096, 00:21:53.240 "max_io_size": 131072, 00:21:53.240 "io_unit_size": 131072, 00:21:53.240 "max_aq_depth": 128, 00:21:53.240 "num_shared_buffers": 511, 00:21:53.240 "buf_cache_size": 4294967295, 00:21:53.240 "dif_insert_or_strip": false, 00:21:53.240 "zcopy": false, 00:21:53.240 "c2h_success": false, 00:21:53.240 "sock_priority": 0, 00:21:53.240 "abort_timeout_sec": 1, 00:21:53.240 "ack_timeout": 0, 00:21:53.240 "data_wr_pool_size": 0 00:21:53.240 } 00:21:53.240 }, 00:21:53.240 { 00:21:53.240 "method": "nvmf_create_subsystem", 00:21:53.240 "params": { 00:21:53.240 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.240 "allow_any_host": false, 00:21:53.240 "serial_number": "SPDK00000000000001", 00:21:53.240 "model_number": "SPDK bdev Controller", 00:21:53.241 "max_namespaces": 10, 00:21:53.241 "min_cntlid": 1, 00:21:53.241 "max_cntlid": 65519, 00:21:53.241 "ana_reporting": false 00:21:53.241 } 00:21:53.241 }, 00:21:53.241 { 00:21:53.241 "method": "nvmf_subsystem_add_host", 00:21:53.241 "params": { 00:21:53.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.241 "host": "nqn.2016-06.io.spdk:host1", 00:21:53.241 "psk": "/tmp/tmp.9cDEgfiRJN" 00:21:53.241 } 00:21:53.241 }, 00:21:53.241 { 00:21:53.241 "method": "nvmf_subsystem_add_ns", 00:21:53.241 "params": { 00:21:53.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.241 "namespace": { 00:21:53.241 "nsid": 1, 00:21:53.241 "bdev_name": "malloc0", 00:21:53.241 "nguid": "4375A17CF4B04BC2A4CAC88BEEABAF11", 00:21:53.241 "uuid": "4375a17c-f4b0-4bc2-a4ca-c88beeabaf11", 00:21:53.241 "no_auto_visible": false 00:21:53.241 } 00:21:53.241 } 00:21:53.241 }, 00:21:53.241 { 00:21:53.241 "method": "nvmf_subsystem_add_listener", 00:21:53.241 "params": { 00:21:53.241 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.241 "listen_address": { 00:21:53.241 "trtype": "TCP", 00:21:53.241 "adrfam": "IPv4", 00:21:53.241 "traddr": "10.0.0.2", 00:21:53.241 "trsvcid": "4420" 00:21:53.241 }, 00:21:53.241 "secure_channel": true 00:21:53.241 } 00:21:53.241 } 00:21:53.241 ] 00:21:53.241 } 00:21:53.241 ] 00:21:53.241 }' 00:21:53.241 19:17:59 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:53.502 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:53.502 "subsystems": [ 00:21:53.502 { 00:21:53.502 "subsystem": "keyring", 00:21:53.502 "config": [] 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "subsystem": "iobuf", 00:21:53.502 "config": [ 00:21:53.502 { 00:21:53.502 "method": "iobuf_set_options", 00:21:53.502 "params": { 00:21:53.502 "small_pool_count": 8192, 00:21:53.502 "large_pool_count": 1024, 00:21:53.502 "small_bufsize": 8192, 00:21:53.502 "large_bufsize": 135168 00:21:53.502 } 00:21:53.502 } 00:21:53.502 ] 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "subsystem": "sock", 00:21:53.502 "config": [ 00:21:53.502 { 00:21:53.502 "method": "sock_set_default_impl", 00:21:53.502 "params": { 00:21:53.502 "impl_name": "posix" 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "sock_impl_set_options", 00:21:53.502 "params": { 00:21:53.502 "impl_name": "ssl", 00:21:53.502 "recv_buf_size": 4096, 00:21:53.502 "send_buf_size": 4096, 00:21:53.502 "enable_recv_pipe": true, 00:21:53.502 "enable_quickack": false, 00:21:53.502 "enable_placement_id": 0, 00:21:53.502 "enable_zerocopy_send_server": true, 00:21:53.502 "enable_zerocopy_send_client": false, 00:21:53.502 "zerocopy_threshold": 0, 00:21:53.502 "tls_version": 0, 00:21:53.502 "enable_ktls": false 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "sock_impl_set_options", 00:21:53.502 "params": { 00:21:53.502 "impl_name": "posix", 00:21:53.502 "recv_buf_size": 2097152, 00:21:53.502 "send_buf_size": 2097152, 00:21:53.502 "enable_recv_pipe": true, 00:21:53.502 "enable_quickack": false, 00:21:53.502 "enable_placement_id": 0, 00:21:53.502 "enable_zerocopy_send_server": true, 00:21:53.502 "enable_zerocopy_send_client": false, 00:21:53.502 "zerocopy_threshold": 0, 00:21:53.502 "tls_version": 0, 00:21:53.502 "enable_ktls": false 00:21:53.502 } 00:21:53.502 } 00:21:53.502 ] 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "subsystem": "vmd", 00:21:53.502 "config": [] 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "subsystem": "accel", 00:21:53.502 "config": [ 00:21:53.502 { 00:21:53.502 "method": "accel_set_options", 00:21:53.502 "params": { 00:21:53.502 "small_cache_size": 128, 00:21:53.502 "large_cache_size": 16, 00:21:53.502 "task_count": 2048, 00:21:53.502 "sequence_count": 2048, 00:21:53.502 "buf_count": 2048 00:21:53.502 } 00:21:53.502 } 00:21:53.502 ] 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "subsystem": "bdev", 00:21:53.502 "config": [ 00:21:53.502 { 00:21:53.502 "method": "bdev_set_options", 00:21:53.502 "params": { 00:21:53.502 "bdev_io_pool_size": 65535, 00:21:53.502 "bdev_io_cache_size": 256, 00:21:53.502 "bdev_auto_examine": true, 00:21:53.502 "iobuf_small_cache_size": 128, 00:21:53.502 "iobuf_large_cache_size": 16 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "bdev_raid_set_options", 00:21:53.502 "params": { 00:21:53.502 "process_window_size_kb": 1024 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "bdev_iscsi_set_options", 00:21:53.502 "params": { 00:21:53.502 "timeout_sec": 30 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "bdev_nvme_set_options", 00:21:53.502 "params": { 00:21:53.502 "action_on_timeout": "none", 00:21:53.502 "timeout_us": 0, 00:21:53.502 "timeout_admin_us": 0, 00:21:53.502 "keep_alive_timeout_ms": 10000, 00:21:53.502 "arbitration_burst": 0, 
00:21:53.502 "low_priority_weight": 0, 00:21:53.502 "medium_priority_weight": 0, 00:21:53.502 "high_priority_weight": 0, 00:21:53.502 "nvme_adminq_poll_period_us": 10000, 00:21:53.502 "nvme_ioq_poll_period_us": 0, 00:21:53.502 "io_queue_requests": 512, 00:21:53.502 "delay_cmd_submit": true, 00:21:53.502 "transport_retry_count": 4, 00:21:53.502 "bdev_retry_count": 3, 00:21:53.502 "transport_ack_timeout": 0, 00:21:53.502 "ctrlr_loss_timeout_sec": 0, 00:21:53.502 "reconnect_delay_sec": 0, 00:21:53.502 "fast_io_fail_timeout_sec": 0, 00:21:53.502 "disable_auto_failback": false, 00:21:53.502 "generate_uuids": false, 00:21:53.502 "transport_tos": 0, 00:21:53.502 "nvme_error_stat": false, 00:21:53.502 "rdma_srq_size": 0, 00:21:53.502 "io_path_stat": false, 00:21:53.502 "allow_accel_sequence": false, 00:21:53.502 "rdma_max_cq_size": 0, 00:21:53.502 "rdma_cm_event_timeout_ms": 0, 00:21:53.502 "dhchap_digests": [ 00:21:53.502 "sha256", 00:21:53.502 "sha384", 00:21:53.502 "sha512" 00:21:53.502 ], 00:21:53.502 "dhchap_dhgroups": [ 00:21:53.502 "null", 00:21:53.502 "ffdhe2048", 00:21:53.502 "ffdhe3072", 00:21:53.502 "ffdhe4096", 00:21:53.502 "ffdhe6144", 00:21:53.502 "ffdhe8192" 00:21:53.502 ] 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "bdev_nvme_attach_controller", 00:21:53.502 "params": { 00:21:53.502 "name": "TLSTEST", 00:21:53.502 "trtype": "TCP", 00:21:53.502 "adrfam": "IPv4", 00:21:53.502 "traddr": "10.0.0.2", 00:21:53.502 "trsvcid": "4420", 00:21:53.502 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.502 "prchk_reftag": false, 00:21:53.502 "prchk_guard": false, 00:21:53.502 "ctrlr_loss_timeout_sec": 0, 00:21:53.502 "reconnect_delay_sec": 0, 00:21:53.502 "fast_io_fail_timeout_sec": 0, 00:21:53.502 "psk": "/tmp/tmp.9cDEgfiRJN", 00:21:53.502 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:53.502 "hdgst": false, 00:21:53.502 "ddgst": false 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "bdev_nvme_set_hotplug", 00:21:53.502 "params": { 00:21:53.502 "period_us": 100000, 00:21:53.502 "enable": false 00:21:53.502 } 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "method": "bdev_wait_for_examine" 00:21:53.502 } 00:21:53.502 ] 00:21:53.502 }, 00:21:53.502 { 00:21:53.502 "subsystem": "nbd", 00:21:53.502 "config": [] 00:21:53.502 } 00:21:53.502 ] 00:21:53.502 }' 00:21:53.502 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1470072 00:21:53.502 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1470072 ']' 00:21:53.502 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1470072 00:21:53.502 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470072 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470072' 00:21:53.503 killing process with pid 1470072 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1470072 00:21:53.503 Received shutdown signal, test time was about 10.000000 seconds 00:21:53.503 00:21:53.503 Latency(us) 00:21:53.503 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:53.503 =================================================================================================================== 00:21:53.503 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:53.503 [2024-07-12 19:17:59.570182] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:53.503 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1470072 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1469702 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1469702 ']' 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1469702 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1469702 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1469702' 00:21:53.764 killing process with pid 1469702 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1469702 00:21:53.764 [2024-07-12 19:17:59.713805] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1469702 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.764 19:17:59 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:53.764 "subsystems": [ 00:21:53.764 { 00:21:53.764 "subsystem": "keyring", 00:21:53.764 "config": [] 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "subsystem": "iobuf", 00:21:53.764 "config": [ 00:21:53.764 { 00:21:53.764 "method": "iobuf_set_options", 00:21:53.764 "params": { 00:21:53.764 "small_pool_count": 8192, 00:21:53.764 "large_pool_count": 1024, 00:21:53.764 "small_bufsize": 8192, 00:21:53.764 "large_bufsize": 135168 00:21:53.764 } 00:21:53.764 } 00:21:53.764 ] 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "subsystem": "sock", 00:21:53.764 "config": [ 00:21:53.764 { 00:21:53.764 "method": "sock_set_default_impl", 00:21:53.764 "params": { 00:21:53.764 "impl_name": "posix" 00:21:53.764 } 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "method": "sock_impl_set_options", 00:21:53.764 "params": { 00:21:53.764 "impl_name": "ssl", 00:21:53.764 "recv_buf_size": 4096, 00:21:53.764 "send_buf_size": 4096, 00:21:53.764 "enable_recv_pipe": true, 00:21:53.764 "enable_quickack": false, 00:21:53.764 "enable_placement_id": 0, 00:21:53.764 "enable_zerocopy_send_server": true, 00:21:53.764 "enable_zerocopy_send_client": false, 00:21:53.764 "zerocopy_threshold": 0, 00:21:53.764 "tls_version": 0, 00:21:53.764 "enable_ktls": false 00:21:53.764 } 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "method": "sock_impl_set_options", 
00:21:53.764 "params": { 00:21:53.764 "impl_name": "posix", 00:21:53.764 "recv_buf_size": 2097152, 00:21:53.764 "send_buf_size": 2097152, 00:21:53.764 "enable_recv_pipe": true, 00:21:53.764 "enable_quickack": false, 00:21:53.764 "enable_placement_id": 0, 00:21:53.764 "enable_zerocopy_send_server": true, 00:21:53.764 "enable_zerocopy_send_client": false, 00:21:53.764 "zerocopy_threshold": 0, 00:21:53.764 "tls_version": 0, 00:21:53.764 "enable_ktls": false 00:21:53.764 } 00:21:53.764 } 00:21:53.764 ] 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "subsystem": "vmd", 00:21:53.764 "config": [] 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "subsystem": "accel", 00:21:53.764 "config": [ 00:21:53.764 { 00:21:53.764 "method": "accel_set_options", 00:21:53.764 "params": { 00:21:53.764 "small_cache_size": 128, 00:21:53.764 "large_cache_size": 16, 00:21:53.764 "task_count": 2048, 00:21:53.764 "sequence_count": 2048, 00:21:53.764 "buf_count": 2048 00:21:53.764 } 00:21:53.764 } 00:21:53.764 ] 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "subsystem": "bdev", 00:21:53.764 "config": [ 00:21:53.764 { 00:21:53.764 "method": "bdev_set_options", 00:21:53.764 "params": { 00:21:53.764 "bdev_io_pool_size": 65535, 00:21:53.764 "bdev_io_cache_size": 256, 00:21:53.764 "bdev_auto_examine": true, 00:21:53.764 "iobuf_small_cache_size": 128, 00:21:53.764 "iobuf_large_cache_size": 16 00:21:53.764 } 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "method": "bdev_raid_set_options", 00:21:53.764 "params": { 00:21:53.764 "process_window_size_kb": 1024 00:21:53.764 } 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "method": "bdev_iscsi_set_options", 00:21:53.764 "params": { 00:21:53.764 "timeout_sec": 30 00:21:53.764 } 00:21:53.764 }, 00:21:53.764 { 00:21:53.764 "method": "bdev_nvme_set_options", 00:21:53.764 "params": { 00:21:53.764 "action_on_timeout": "none", 00:21:53.764 "timeout_us": 0, 00:21:53.764 "timeout_admin_us": 0, 00:21:53.764 "keep_alive_timeout_ms": 10000, 00:21:53.764 "arbitration_burst": 0, 00:21:53.764 "low_priority_weight": 0, 00:21:53.764 "medium_priority_weight": 0, 00:21:53.764 "high_priority_weight": 0, 00:21:53.764 "nvme_adminq_poll_period_us": 10000, 00:21:53.764 "nvme_ioq_poll_period_us": 0, 00:21:53.764 "io_queue_requests": 0, 00:21:53.764 "delay_cmd_submit": true, 00:21:53.764 "transport_retry_count": 4, 00:21:53.764 "bdev_retry_count": 3, 00:21:53.764 "transport_ack_timeout": 0, 00:21:53.764 "ctrlr_loss_timeout_sec": 0, 00:21:53.764 "reconnect_delay_sec": 0, 00:21:53.764 "fast_io_fail_timeout_sec": 0, 00:21:53.764 "disable_auto_failback": false, 00:21:53.764 "generate_uuids": false, 00:21:53.764 "transport_tos": 0, 00:21:53.764 "nvme_error_stat": false, 00:21:53.764 "rdma_srq_size": 0, 00:21:53.764 "io_path_stat": false, 00:21:53.764 "allow_accel_sequence": false, 00:21:53.764 "rdma_max_cq_size": 0, 00:21:53.764 "rdma_cm_event_timeout_ms": 0, 00:21:53.764 "dhchap_digests": [ 00:21:53.764 "sha256", 00:21:53.765 "sha384", 00:21:53.765 "sha512" 00:21:53.765 ], 00:21:53.765 "dhchap_dhgroups": [ 00:21:53.765 "null", 00:21:53.765 "ffdhe2048", 00:21:53.765 "ffdhe3072", 00:21:53.765 "ffdhe4096", 00:21:53.765 "ffdhe6144", 00:21:53.765 "ffdhe8192" 00:21:53.765 ] 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "bdev_nvme_set_hotplug", 00:21:53.765 "params": { 00:21:53.765 "period_us": 100000, 00:21:53.765 "enable": false 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "bdev_malloc_create", 00:21:53.765 "params": { 00:21:53.765 "name": "malloc0", 00:21:53.765 "num_blocks": 8192, 
00:21:53.765 "block_size": 4096, 00:21:53.765 "physical_block_size": 4096, 00:21:53.765 "uuid": "4375a17c-f4b0-4bc2-a4ca-c88beeabaf11", 00:21:53.765 "optimal_io_boundary": 0 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "bdev_wait_for_examine" 00:21:53.765 } 00:21:53.765 ] 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "subsystem": "nbd", 00:21:53.765 "config": [] 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "subsystem": "scheduler", 00:21:53.765 "config": [ 00:21:53.765 { 00:21:53.765 "method": "framework_set_scheduler", 00:21:53.765 "params": { 00:21:53.765 "name": "static" 00:21:53.765 } 00:21:53.765 } 00:21:53.765 ] 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "subsystem": "nvmf", 00:21:53.765 "config": [ 00:21:53.765 { 00:21:53.765 "method": "nvmf_set_config", 00:21:53.765 "params": { 00:21:53.765 "discovery_filter": "match_any", 00:21:53.765 "admin_cmd_passthru": { 00:21:53.765 "identify_ctrlr": false 00:21:53.765 } 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_set_max_subsystems", 00:21:53.765 "params": { 00:21:53.765 "max_subsystems": 1024 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_set_crdt", 00:21:53.765 "params": { 00:21:53.765 "crdt1": 0, 00:21:53.765 "crdt2": 0, 00:21:53.765 "crdt3": 0 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_create_transport", 00:21:53.765 "params": { 00:21:53.765 "trtype": "TCP", 00:21:53.765 "max_queue_depth": 128, 00:21:53.765 "max_io_qpairs_per_ctrlr": 127, 00:21:53.765 "in_capsule_data_size": 4096, 00:21:53.765 "max_io_size": 131072, 00:21:53.765 "io_unit_size": 131072, 00:21:53.765 "max_aq_depth": 128, 00:21:53.765 "num_shared_buffers": 511, 00:21:53.765 "buf_cache_size": 4294967295, 00:21:53.765 "dif_insert_or_strip": false, 00:21:53.765 "zcopy": false, 00:21:53.765 "c2h_success": false, 00:21:53.765 "sock_priority": 0, 00:21:53.765 "abort_timeout_sec": 1, 00:21:53.765 "ack_timeout": 0, 00:21:53.765 "data_wr_pool_size": 0 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_create_subsystem", 00:21:53.765 "params": { 00:21:53.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.765 "allow_any_host": false, 00:21:53.765 "serial_number": "SPDK00000000000001", 00:21:53.765 "model_number": "SPDK bdev Controller", 00:21:53.765 "max_namespaces": 10, 00:21:53.765 "min_cntlid": 1, 00:21:53.765 "max_cntlid": 65519, 00:21:53.765 "ana_reporting": false 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_subsystem_add_host", 00:21:53.765 "params": { 00:21:53.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.765 "host": "nqn.2016-06.io.spdk:host1", 00:21:53.765 "psk": "/tmp/tmp.9cDEgfiRJN" 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_subsystem_add_ns", 00:21:53.765 "params": { 00:21:53.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.765 "namespace": { 00:21:53.765 "nsid": 1, 00:21:53.765 "bdev_name": "malloc0", 00:21:53.765 "nguid": "4375A17CF4B04BC2A4CAC88BEEABAF11", 00:21:53.765 "uuid": "4375a17c-f4b0-4bc2-a4ca-c88beeabaf11", 00:21:53.765 "no_auto_visible": false 00:21:53.765 } 00:21:53.765 } 00:21:53.765 }, 00:21:53.765 { 00:21:53.765 "method": "nvmf_subsystem_add_listener", 00:21:53.765 "params": { 00:21:53.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:53.765 "listen_address": { 00:21:53.765 "trtype": "TCP", 00:21:53.765 "adrfam": "IPv4", 00:21:53.765 "traddr": "10.0.0.2", 00:21:53.765 "trsvcid": "4420" 00:21:53.765 }, 00:21:53.765 "secure_channel": true 00:21:53.765 } 
00:21:53.765 } 00:21:53.765 ] 00:21:53.765 } 00:21:53.765 ] 00:21:53.765 }' 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1470483 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1470483 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1470483 ']' 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:53.765 19:17:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.026 [2024-07-12 19:17:59.897817] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:21:54.026 [2024-07-12 19:17:59.897873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.026 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.026 [2024-07-12 19:17:59.978945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.026 [2024-07-12 19:18:00.036269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.026 [2024-07-12 19:18:00.036301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.026 [2024-07-12 19:18:00.036307] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.026 [2024-07-12 19:18:00.036312] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.026 [2024-07-12 19:18:00.036317] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
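This restart closes the loop on the two save_config dumps above: the target configuration captured at target/tls.sh@196 is fed back in through -c /dev/fd/62, and the bdevperf configuration from @197 will be replayed the same way through -c /dev/fd/63, so the whole TLS setup, PSK path included, is rebuilt from JSON without re-issuing the individual RPCs. A minimal sketch of the same round trip using ordinary files instead of the script's file-descriptor plumbing (the file names are illustrative):

  # Capture the live target configuration, then restart the target from it.
  rpc.py save_config > tgt_config.json
  # ... stop the old nvmf_tgt ...
  nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c tgt_config.json

  # Same idea on the initiator side: bdevperf re-creates its bdevs and controllers from the saved config at startup.
  rpc.py -s /var/tmp/bdevperf.sock save_config > bdevperf_config.json
  bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c bdevperf_config.json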
00:21:54.026 [2024-07-12 19:18:00.036357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.286 [2024-07-12 19:18:00.219298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:54.286 [2024-07-12 19:18:00.235269] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:54.286 [2024-07-12 19:18:00.251314] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:54.286 [2024-07-12 19:18:00.260280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:54.546 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:54.546 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:54.546 19:18:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:54.546 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:54.546 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1470551 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1470551 /var/tmp/bdevperf.sock 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1470551 ']' 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.807 19:18:00 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:54.807 "subsystems": [ 00:21:54.807 { 00:21:54.807 "subsystem": "keyring", 00:21:54.807 "config": [] 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "subsystem": "iobuf", 00:21:54.807 "config": [ 00:21:54.807 { 00:21:54.807 "method": "iobuf_set_options", 00:21:54.807 "params": { 00:21:54.807 "small_pool_count": 8192, 00:21:54.807 "large_pool_count": 1024, 00:21:54.807 "small_bufsize": 8192, 00:21:54.807 "large_bufsize": 135168 00:21:54.807 } 00:21:54.807 } 00:21:54.807 ] 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "subsystem": "sock", 00:21:54.807 "config": [ 00:21:54.807 { 00:21:54.807 "method": "sock_set_default_impl", 00:21:54.807 "params": { 00:21:54.807 "impl_name": "posix" 00:21:54.807 } 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "method": "sock_impl_set_options", 00:21:54.807 "params": { 00:21:54.807 "impl_name": "ssl", 00:21:54.807 "recv_buf_size": 4096, 00:21:54.807 "send_buf_size": 4096, 00:21:54.807 "enable_recv_pipe": true, 00:21:54.807 "enable_quickack": false, 00:21:54.807 "enable_placement_id": 0, 00:21:54.807 "enable_zerocopy_send_server": true, 00:21:54.807 "enable_zerocopy_send_client": false, 00:21:54.807 "zerocopy_threshold": 0, 00:21:54.807 "tls_version": 0, 00:21:54.807 "enable_ktls": false 00:21:54.807 } 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "method": "sock_impl_set_options", 00:21:54.807 "params": { 00:21:54.807 "impl_name": "posix", 00:21:54.807 "recv_buf_size": 2097152, 00:21:54.807 "send_buf_size": 2097152, 00:21:54.807 "enable_recv_pipe": true, 00:21:54.807 "enable_quickack": false, 00:21:54.807 "enable_placement_id": 0, 00:21:54.807 "enable_zerocopy_send_server": true, 00:21:54.807 "enable_zerocopy_send_client": false, 00:21:54.807 "zerocopy_threshold": 0, 00:21:54.807 "tls_version": 0, 00:21:54.807 "enable_ktls": false 00:21:54.807 } 00:21:54.807 } 00:21:54.807 ] 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "subsystem": "vmd", 00:21:54.807 "config": [] 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "subsystem": "accel", 00:21:54.807 "config": [ 00:21:54.807 { 00:21:54.807 "method": "accel_set_options", 00:21:54.807 "params": { 00:21:54.807 "small_cache_size": 128, 00:21:54.807 "large_cache_size": 16, 00:21:54.807 "task_count": 2048, 00:21:54.807 "sequence_count": 2048, 00:21:54.807 "buf_count": 2048 00:21:54.807 } 00:21:54.807 } 00:21:54.807 ] 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "subsystem": "bdev", 00:21:54.807 "config": [ 00:21:54.807 { 00:21:54.807 "method": "bdev_set_options", 00:21:54.807 "params": { 00:21:54.807 "bdev_io_pool_size": 65535, 00:21:54.807 "bdev_io_cache_size": 256, 00:21:54.807 "bdev_auto_examine": true, 00:21:54.807 "iobuf_small_cache_size": 128, 00:21:54.807 "iobuf_large_cache_size": 16 00:21:54.807 } 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "method": "bdev_raid_set_options", 00:21:54.807 "params": { 00:21:54.807 "process_window_size_kb": 1024 00:21:54.807 } 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "method": "bdev_iscsi_set_options", 00:21:54.807 "params": { 00:21:54.807 "timeout_sec": 30 00:21:54.807 } 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "method": 
"bdev_nvme_set_options", 00:21:54.807 "params": { 00:21:54.807 "action_on_timeout": "none", 00:21:54.807 "timeout_us": 0, 00:21:54.807 "timeout_admin_us": 0, 00:21:54.807 "keep_alive_timeout_ms": 10000, 00:21:54.807 "arbitration_burst": 0, 00:21:54.807 "low_priority_weight": 0, 00:21:54.807 "medium_priority_weight": 0, 00:21:54.807 "high_priority_weight": 0, 00:21:54.807 "nvme_adminq_poll_period_us": 10000, 00:21:54.807 "nvme_ioq_poll_period_us": 0, 00:21:54.807 "io_queue_requests": 512, 00:21:54.807 "delay_cmd_submit": true, 00:21:54.807 "transport_retry_count": 4, 00:21:54.807 "bdev_retry_count": 3, 00:21:54.807 "transport_ack_timeout": 0, 00:21:54.807 "ctrlr_loss_timeout_sec": 0, 00:21:54.807 "reconnect_delay_sec": 0, 00:21:54.807 "fast_io_fail_timeout_sec": 0, 00:21:54.807 "disable_auto_failback": false, 00:21:54.807 "generate_uuids": false, 00:21:54.807 "transport_tos": 0, 00:21:54.807 "nvme_error_stat": false, 00:21:54.807 "rdma_srq_size": 0, 00:21:54.807 "io_path_stat": false, 00:21:54.807 "allow_accel_sequence": false, 00:21:54.807 "rdma_max_cq_size": 0, 00:21:54.807 "rdma_cm_event_timeout_ms": 0, 00:21:54.807 "dhchap_digests": [ 00:21:54.807 "sha256", 00:21:54.807 "sha384", 00:21:54.807 "sha512" 00:21:54.807 ], 00:21:54.807 "dhchap_dhgroups": [ 00:21:54.807 "null", 00:21:54.807 "ffdhe2048", 00:21:54.807 "ffdhe3072", 00:21:54.807 "ffdhe4096", 00:21:54.807 "ffdhe6144", 00:21:54.807 "ffdhe8192" 00:21:54.807 ] 00:21:54.807 } 00:21:54.807 }, 00:21:54.807 { 00:21:54.807 "method": "bdev_nvme_attach_controller", 00:21:54.807 "params": { 00:21:54.807 "name": "TLSTEST", 00:21:54.807 "trtype": "TCP", 00:21:54.807 "adrfam": "IPv4", 00:21:54.807 "traddr": "10.0.0.2", 00:21:54.807 "trsvcid": "4420", 00:21:54.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:54.807 "prchk_reftag": false, 00:21:54.807 "prchk_guard": false, 00:21:54.807 "ctrlr_loss_timeout_sec": 0, 00:21:54.807 "reconnect_delay_sec": 0, 00:21:54.807 "fast_io_fail_timeout_sec": 0, 00:21:54.807 "psk": "/tmp/tmp.9cDEgfiRJN", 00:21:54.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:54.808 "hdgst": false, 00:21:54.808 "ddgst": false 00:21:54.808 } 00:21:54.808 }, 00:21:54.808 { 00:21:54.808 "method": "bdev_nvme_set_hotplug", 00:21:54.808 "params": { 00:21:54.808 "period_us": 100000, 00:21:54.808 "enable": false 00:21:54.808 } 00:21:54.808 }, 00:21:54.808 { 00:21:54.808 "method": "bdev_wait_for_examine" 00:21:54.808 } 00:21:54.808 ] 00:21:54.808 }, 00:21:54.808 { 00:21:54.808 "subsystem": "nbd", 00:21:54.808 "config": [] 00:21:54.808 } 00:21:54.808 ] 00:21:54.808 }' 00:21:54.808 [2024-07-12 19:18:00.740912] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:21:54.808 [2024-07-12 19:18:00.740966] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1470551 ] 00:21:54.808 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.808 [2024-07-12 19:18:00.791056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.808 [2024-07-12 19:18:00.843532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.068 [2024-07-12 19:18:00.968092] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.068 [2024-07-12 19:18:00.968176] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:55.638 19:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.638 19:18:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:55.638 19:18:01 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:55.638 Running I/O for 10 seconds... 00:22:05.639 00:22:05.639 Latency(us) 00:22:05.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.639 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:05.639 Verification LBA range: start 0x0 length 0x2000 00:22:05.639 TLSTESTn1 : 10.05 3352.51 13.10 0.00 0.00 38071.17 5843.63 54613.33 00:22:05.639 =================================================================================================================== 00:22:05.639 Total : 3352.51 13.10 0.00 0.00 38071.17 5843.63 54613.33 00:22:05.639 0 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1470551 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1470551 ']' 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1470551 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470551 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470551' 00:22:05.639 killing process with pid 1470551 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1470551 00:22:05.639 Received shutdown signal, test time was about 10.000000 seconds 00:22:05.639 00:22:05.639 Latency(us) 00:22:05.639 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:05.639 =================================================================================================================== 00:22:05.639 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:05.639 [2024-07-12 19:18:11.752840] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:05.639 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1470551 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1470483 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1470483 ']' 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1470483 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1470483 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1470483' 00:22:05.900 killing process with pid 1470483 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1470483 00:22:05.900 [2024-07-12 19:18:11.918580] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:05.900 19:18:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1470483 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1473419 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1473419 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1473419 ']' 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:06.162 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.162 [2024-07-12 19:18:12.094433] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
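With the first run's bdevperf (pid 1470551) and target (pid 1470483) gone, target/tls.sh@218 restarts nvmf_tgt inside the cvl_0_0_ns_spdk namespace and tls.sh@219 rebuilds the TLS target over RPC. Unlike the first run, whose controller attach carried the PSK inline through the deprecated spdk_nvme_ctrlr_opts.psk field, this pass keeps the key in /tmp/tmp.9cDEgfiRJN and registers it per host on the target side. Condensed, the RPC sequence traced below is roughly the following sketch (arguments copied from the trace; SPDK and KEY are shorthand added here):

  # Target-side TLS setup mirroring setup_nvmf_tgt() in target/tls.sh.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  KEY=/tmp/tmp.9cDEgfiRJN
  $SPDK/scripts/rpc.py nvmf_create_transport -t tcp -o
  $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  # -k enables TLS on the listener (the trace flags this as experimental)
  $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $SPDK/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $KEY

The -k flag on the listener is what triggers the "TLS support is considered experimental" notice seen right after it in the trace.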
00:22:06.162 [2024-07-12 19:18:12.094488] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.162 EAL: No free 2048 kB hugepages reported on node 1 00:22:06.162 [2024-07-12 19:18:12.160116] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:06.162 [2024-07-12 19:18:12.222516] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:06.162 [2024-07-12 19:18:12.222557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:06.162 [2024-07-12 19:18:12.222565] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:06.162 [2024-07-12 19:18:12.222571] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:06.162 [2024-07-12 19:18:12.222577] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:06.162 [2024-07-12 19:18:12.222608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.9cDEgfiRJN 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.9cDEgfiRJN 00:22:07.103 19:18:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:07.103 [2024-07-12 19:18:13.045547] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.103 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:07.103 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:07.363 [2024-07-12 19:18:13.342285] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:07.363 [2024-07-12 19:18:13.342512] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:07.363 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:07.623 malloc0 00:22:07.624 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:07.624 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.9cDEgfiRJN 00:22:07.884 [2024-07-12 19:18:13.794386] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1473782 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1473782 /var/tmp/bdevperf.sock 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1473782 ']' 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:07.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.884 19:18:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:07.884 [2024-07-12 19:18:13.857204] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:22:07.884 [2024-07-12 19:18:13.857253] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1473782 ] 00:22:07.884 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.884 [2024-07-12 19:18:13.931582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:07.884 [2024-07-12 19:18:13.985114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:08.555 19:18:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.555 19:18:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:08.555 19:18:14 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9cDEgfiRJN 00:22:08.816 19:18:14 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:08.816 [2024-07-12 19:18:14.906983] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:09.077 nvme0n1 00:22:09.077 19:18:15 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:09.077 Running I/O for 1 seconds... 
00:22:10.017 00:22:10.017 Latency(us) 00:22:10.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.017 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:10.017 Verification LBA range: start 0x0 length 0x2000 00:22:10.017 nvme0n1 : 1.07 2229.73 8.71 0.00 0.00 55810.96 4969.81 108789.76 00:22:10.017 =================================================================================================================== 00:22:10.017 Total : 2229.73 8.71 0.00 0.00 55810.96 4969.81 108789.76 00:22:10.017 0 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1473782 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1473782 ']' 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1473782 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473782 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473782' 00:22:10.278 killing process with pid 1473782 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1473782 00:22:10.278 Received shutdown signal, test time was about 1.000000 seconds 00:22:10.278 00:22:10.278 Latency(us) 00:22:10.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.278 =================================================================================================================== 00:22:10.278 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1473782 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1473419 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1473419 ']' 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1473419 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1473419 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1473419' 00:22:10.278 killing process with pid 1473419 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1473419 00:22:10.278 [2024-07-12 19:18:16.388306] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:10.278 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1473419 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:10.539 
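The one-second run above exercises the keyring-based PSK path on the initiator: rather than embedding the key file in the controller options (the deprecated route used in the first run), tls.sh@227 registers the file with the keyring and tls.sh@228 attaches by key name. Against the idle (-z) bdevperf instance started at tls.sh@220, the flow reduces to roughly this sketch (the rpc helper function is shorthand added here):

  # Initiator-side TLS attach used in the run above: add the PSK file to the
  # keyring as key0, attach the controller by key name, then kick off the I/O.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock "$@"; }
  rpc keyring_file_add_key key0 /tmp/tmp.9cDEgfiRJN
  rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests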
19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1474182 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1474182 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1474182 ']' 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:10.539 19:18:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:10.539 [2024-07-12 19:18:16.588170] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:22:10.539 [2024-07-12 19:18:16.588228] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.539 EAL: No free 2048 kB hugepages reported on node 1 00:22:10.539 [2024-07-12 19:18:16.654661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.799 [2024-07-12 19:18:16.719337] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.799 [2024-07-12 19:18:16.719375] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.799 [2024-07-12 19:18:16.719382] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.799 [2024-07-12 19:18:16.719389] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.799 [2024-07-12 19:18:16.719394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:10.799 [2024-07-12 19:18:16.719414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.370 [2024-07-12 19:18:17.401822] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:11.370 malloc0 00:22:11.370 [2024-07-12 19:18:17.428627] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:11.370 [2024-07-12 19:18:17.428851] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1474491 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1474491 /var/tmp/bdevperf.sock 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1474491 ']' 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:11.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:11.370 19:18:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:11.630 [2024-07-12 19:18:17.506474] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
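What is new in this third pass (bdevperf pid 1474491, launching below) is its ending: once the short verify run completes, both daemons are asked for their live configuration with save_config, the target dump landing in tgtcfg and the bdevperf dump in bperfcfg, and those blobs drive the restarts that follow. A rough sketch, with plain rpc.py standing in for the test's rpc_cmd wrapper:

  # Capture the running configuration of both sides so the next pass can be
  # started from -c instead of re-issuing every RPC by hand.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  tgtcfg=$("$SPDK/scripts/rpc.py" save_config)                              # target, default /var/tmp/spdk.sock
  bperfcfg=$("$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock save_config)  # bdevperf RPC socket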
00:22:11.630 [2024-07-12 19:18:17.506521] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1474491 ] 00:22:11.630 EAL: No free 2048 kB hugepages reported on node 1 00:22:11.630 [2024-07-12 19:18:17.580834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:11.630 [2024-07-12 19:18:17.634189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.201 19:18:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:12.201 19:18:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:12.201 19:18:18 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.9cDEgfiRJN 00:22:12.462 19:18:18 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:12.463 [2024-07-12 19:18:18.560053] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:12.723 nvme0n1 00:22:12.723 19:18:18 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:12.723 Running I/O for 1 seconds... 00:22:14.105 00:22:14.105 Latency(us) 00:22:14.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.105 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:14.105 Verification LBA range: start 0x0 length 0x2000 00:22:14.105 nvme0n1 : 1.07 2074.44 8.10 0.00 0.00 59971.75 4587.52 108789.76 00:22:14.105 =================================================================================================================== 00:22:14.105 Total : 2074.44 8.10 0.00 0.00 59971.75 4587.52 108789.76 00:22:14.105 0 00:22:14.105 19:18:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:14.105 19:18:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.105 19:18:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.105 19:18:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.105 19:18:19 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:14.105 "subsystems": [ 00:22:14.105 { 00:22:14.105 "subsystem": "keyring", 00:22:14.105 "config": [ 00:22:14.105 { 00:22:14.105 "method": "keyring_file_add_key", 00:22:14.105 "params": { 00:22:14.105 "name": "key0", 00:22:14.105 "path": "/tmp/tmp.9cDEgfiRJN" 00:22:14.105 } 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "subsystem": "iobuf", 00:22:14.105 "config": [ 00:22:14.105 { 00:22:14.105 "method": "iobuf_set_options", 00:22:14.105 "params": { 00:22:14.105 "small_pool_count": 8192, 00:22:14.105 "large_pool_count": 1024, 00:22:14.105 "small_bufsize": 8192, 00:22:14.105 "large_bufsize": 135168 00:22:14.105 } 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "subsystem": "sock", 00:22:14.105 "config": [ 00:22:14.105 { 00:22:14.105 "method": "sock_set_default_impl", 00:22:14.105 "params": { 00:22:14.105 "impl_name": "posix" 00:22:14.105 } 
00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "method": "sock_impl_set_options", 00:22:14.105 "params": { 00:22:14.105 "impl_name": "ssl", 00:22:14.105 "recv_buf_size": 4096, 00:22:14.105 "send_buf_size": 4096, 00:22:14.105 "enable_recv_pipe": true, 00:22:14.105 "enable_quickack": false, 00:22:14.105 "enable_placement_id": 0, 00:22:14.105 "enable_zerocopy_send_server": true, 00:22:14.105 "enable_zerocopy_send_client": false, 00:22:14.105 "zerocopy_threshold": 0, 00:22:14.105 "tls_version": 0, 00:22:14.105 "enable_ktls": false 00:22:14.105 } 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "method": "sock_impl_set_options", 00:22:14.105 "params": { 00:22:14.105 "impl_name": "posix", 00:22:14.105 "recv_buf_size": 2097152, 00:22:14.105 "send_buf_size": 2097152, 00:22:14.105 "enable_recv_pipe": true, 00:22:14.105 "enable_quickack": false, 00:22:14.105 "enable_placement_id": 0, 00:22:14.105 "enable_zerocopy_send_server": true, 00:22:14.105 "enable_zerocopy_send_client": false, 00:22:14.105 "zerocopy_threshold": 0, 00:22:14.105 "tls_version": 0, 00:22:14.105 "enable_ktls": false 00:22:14.105 } 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "subsystem": "vmd", 00:22:14.105 "config": [] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "subsystem": "accel", 00:22:14.105 "config": [ 00:22:14.105 { 00:22:14.105 "method": "accel_set_options", 00:22:14.105 "params": { 00:22:14.105 "small_cache_size": 128, 00:22:14.105 "large_cache_size": 16, 00:22:14.105 "task_count": 2048, 00:22:14.105 "sequence_count": 2048, 00:22:14.105 "buf_count": 2048 00:22:14.105 } 00:22:14.105 } 00:22:14.105 ] 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "subsystem": "bdev", 00:22:14.105 "config": [ 00:22:14.105 { 00:22:14.105 "method": "bdev_set_options", 00:22:14.105 "params": { 00:22:14.105 "bdev_io_pool_size": 65535, 00:22:14.105 "bdev_io_cache_size": 256, 00:22:14.105 "bdev_auto_examine": true, 00:22:14.105 "iobuf_small_cache_size": 128, 00:22:14.105 "iobuf_large_cache_size": 16 00:22:14.105 } 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "method": "bdev_raid_set_options", 00:22:14.105 "params": { 00:22:14.105 "process_window_size_kb": 1024 00:22:14.105 } 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "method": "bdev_iscsi_set_options", 00:22:14.105 "params": { 00:22:14.105 "timeout_sec": 30 00:22:14.105 } 00:22:14.105 }, 00:22:14.105 { 00:22:14.105 "method": "bdev_nvme_set_options", 00:22:14.105 "params": { 00:22:14.105 "action_on_timeout": "none", 00:22:14.105 "timeout_us": 0, 00:22:14.105 "timeout_admin_us": 0, 00:22:14.105 "keep_alive_timeout_ms": 10000, 00:22:14.105 "arbitration_burst": 0, 00:22:14.105 "low_priority_weight": 0, 00:22:14.105 "medium_priority_weight": 0, 00:22:14.105 "high_priority_weight": 0, 00:22:14.105 "nvme_adminq_poll_period_us": 10000, 00:22:14.105 "nvme_ioq_poll_period_us": 0, 00:22:14.105 "io_queue_requests": 0, 00:22:14.105 "delay_cmd_submit": true, 00:22:14.105 "transport_retry_count": 4, 00:22:14.105 "bdev_retry_count": 3, 00:22:14.105 "transport_ack_timeout": 0, 00:22:14.105 "ctrlr_loss_timeout_sec": 0, 00:22:14.105 "reconnect_delay_sec": 0, 00:22:14.105 "fast_io_fail_timeout_sec": 0, 00:22:14.105 "disable_auto_failback": false, 00:22:14.105 "generate_uuids": false, 00:22:14.105 "transport_tos": 0, 00:22:14.105 "nvme_error_stat": false, 00:22:14.105 "rdma_srq_size": 0, 00:22:14.105 "io_path_stat": false, 00:22:14.105 "allow_accel_sequence": false, 00:22:14.105 "rdma_max_cq_size": 0, 00:22:14.105 "rdma_cm_event_timeout_ms": 0, 00:22:14.105 "dhchap_digests": [ 00:22:14.105 "sha256", 
00:22:14.105 "sha384", 00:22:14.105 "sha512" 00:22:14.105 ], 00:22:14.105 "dhchap_dhgroups": [ 00:22:14.105 "null", 00:22:14.105 "ffdhe2048", 00:22:14.105 "ffdhe3072", 00:22:14.105 "ffdhe4096", 00:22:14.105 "ffdhe6144", 00:22:14.106 "ffdhe8192" 00:22:14.106 ] 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "bdev_nvme_set_hotplug", 00:22:14.106 "params": { 00:22:14.106 "period_us": 100000, 00:22:14.106 "enable": false 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "bdev_malloc_create", 00:22:14.106 "params": { 00:22:14.106 "name": "malloc0", 00:22:14.106 "num_blocks": 8192, 00:22:14.106 "block_size": 4096, 00:22:14.106 "physical_block_size": 4096, 00:22:14.106 "uuid": "4ca432db-f095-41f9-b731-18559aa4c8bd", 00:22:14.106 "optimal_io_boundary": 0 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "bdev_wait_for_examine" 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "nbd", 00:22:14.106 "config": [] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "scheduler", 00:22:14.106 "config": [ 00:22:14.106 { 00:22:14.106 "method": "framework_set_scheduler", 00:22:14.106 "params": { 00:22:14.106 "name": "static" 00:22:14.106 } 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "nvmf", 00:22:14.106 "config": [ 00:22:14.106 { 00:22:14.106 "method": "nvmf_set_config", 00:22:14.106 "params": { 00:22:14.106 "discovery_filter": "match_any", 00:22:14.106 "admin_cmd_passthru": { 00:22:14.106 "identify_ctrlr": false 00:22:14.106 } 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_set_max_subsystems", 00:22:14.106 "params": { 00:22:14.106 "max_subsystems": 1024 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_set_crdt", 00:22:14.106 "params": { 00:22:14.106 "crdt1": 0, 00:22:14.106 "crdt2": 0, 00:22:14.106 "crdt3": 0 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_create_transport", 00:22:14.106 "params": { 00:22:14.106 "trtype": "TCP", 00:22:14.106 "max_queue_depth": 128, 00:22:14.106 "max_io_qpairs_per_ctrlr": 127, 00:22:14.106 "in_capsule_data_size": 4096, 00:22:14.106 "max_io_size": 131072, 00:22:14.106 "io_unit_size": 131072, 00:22:14.106 "max_aq_depth": 128, 00:22:14.106 "num_shared_buffers": 511, 00:22:14.106 "buf_cache_size": 4294967295, 00:22:14.106 "dif_insert_or_strip": false, 00:22:14.106 "zcopy": false, 00:22:14.106 "c2h_success": false, 00:22:14.106 "sock_priority": 0, 00:22:14.106 "abort_timeout_sec": 1, 00:22:14.106 "ack_timeout": 0, 00:22:14.106 "data_wr_pool_size": 0 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_create_subsystem", 00:22:14.106 "params": { 00:22:14.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.106 "allow_any_host": false, 00:22:14.106 "serial_number": "00000000000000000000", 00:22:14.106 "model_number": "SPDK bdev Controller", 00:22:14.106 "max_namespaces": 32, 00:22:14.106 "min_cntlid": 1, 00:22:14.106 "max_cntlid": 65519, 00:22:14.106 "ana_reporting": false 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_subsystem_add_host", 00:22:14.106 "params": { 00:22:14.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.106 "host": "nqn.2016-06.io.spdk:host1", 00:22:14.106 "psk": "key0" 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_subsystem_add_ns", 00:22:14.106 "params": { 00:22:14.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.106 "namespace": { 00:22:14.106 "nsid": 1, 
00:22:14.106 "bdev_name": "malloc0", 00:22:14.106 "nguid": "4CA432DBF09541F9B73118559AA4C8BD", 00:22:14.106 "uuid": "4ca432db-f095-41f9-b731-18559aa4c8bd", 00:22:14.106 "no_auto_visible": false 00:22:14.106 } 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "nvmf_subsystem_add_listener", 00:22:14.106 "params": { 00:22:14.106 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.106 "listen_address": { 00:22:14.106 "trtype": "TCP", 00:22:14.106 "adrfam": "IPv4", 00:22:14.106 "traddr": "10.0.0.2", 00:22:14.106 "trsvcid": "4420" 00:22:14.106 }, 00:22:14.106 "secure_channel": true 00:22:14.106 } 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }' 00:22:14.106 19:18:19 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:14.106 19:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:14.106 "subsystems": [ 00:22:14.106 { 00:22:14.106 "subsystem": "keyring", 00:22:14.106 "config": [ 00:22:14.106 { 00:22:14.106 "method": "keyring_file_add_key", 00:22:14.106 "params": { 00:22:14.106 "name": "key0", 00:22:14.106 "path": "/tmp/tmp.9cDEgfiRJN" 00:22:14.106 } 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "iobuf", 00:22:14.106 "config": [ 00:22:14.106 { 00:22:14.106 "method": "iobuf_set_options", 00:22:14.106 "params": { 00:22:14.106 "small_pool_count": 8192, 00:22:14.106 "large_pool_count": 1024, 00:22:14.106 "small_bufsize": 8192, 00:22:14.106 "large_bufsize": 135168 00:22:14.106 } 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "sock", 00:22:14.106 "config": [ 00:22:14.106 { 00:22:14.106 "method": "sock_set_default_impl", 00:22:14.106 "params": { 00:22:14.106 "impl_name": "posix" 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "sock_impl_set_options", 00:22:14.106 "params": { 00:22:14.106 "impl_name": "ssl", 00:22:14.106 "recv_buf_size": 4096, 00:22:14.106 "send_buf_size": 4096, 00:22:14.106 "enable_recv_pipe": true, 00:22:14.106 "enable_quickack": false, 00:22:14.106 "enable_placement_id": 0, 00:22:14.106 "enable_zerocopy_send_server": true, 00:22:14.106 "enable_zerocopy_send_client": false, 00:22:14.106 "zerocopy_threshold": 0, 00:22:14.106 "tls_version": 0, 00:22:14.106 "enable_ktls": false 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "sock_impl_set_options", 00:22:14.106 "params": { 00:22:14.106 "impl_name": "posix", 00:22:14.106 "recv_buf_size": 2097152, 00:22:14.106 "send_buf_size": 2097152, 00:22:14.106 "enable_recv_pipe": true, 00:22:14.106 "enable_quickack": false, 00:22:14.106 "enable_placement_id": 0, 00:22:14.106 "enable_zerocopy_send_server": true, 00:22:14.106 "enable_zerocopy_send_client": false, 00:22:14.106 "zerocopy_threshold": 0, 00:22:14.106 "tls_version": 0, 00:22:14.106 "enable_ktls": false 00:22:14.106 } 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "vmd", 00:22:14.106 "config": [] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "accel", 00:22:14.106 "config": [ 00:22:14.106 { 00:22:14.106 "method": "accel_set_options", 00:22:14.106 "params": { 00:22:14.106 "small_cache_size": 128, 00:22:14.106 "large_cache_size": 16, 00:22:14.106 "task_count": 2048, 00:22:14.106 "sequence_count": 2048, 00:22:14.106 "buf_count": 2048 00:22:14.106 } 00:22:14.106 } 00:22:14.106 ] 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "subsystem": "bdev", 00:22:14.106 "config": [ 
00:22:14.106 { 00:22:14.106 "method": "bdev_set_options", 00:22:14.106 "params": { 00:22:14.106 "bdev_io_pool_size": 65535, 00:22:14.106 "bdev_io_cache_size": 256, 00:22:14.106 "bdev_auto_examine": true, 00:22:14.106 "iobuf_small_cache_size": 128, 00:22:14.106 "iobuf_large_cache_size": 16 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "bdev_raid_set_options", 00:22:14.106 "params": { 00:22:14.106 "process_window_size_kb": 1024 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "bdev_iscsi_set_options", 00:22:14.106 "params": { 00:22:14.106 "timeout_sec": 30 00:22:14.106 } 00:22:14.106 }, 00:22:14.106 { 00:22:14.106 "method": "bdev_nvme_set_options", 00:22:14.106 "params": { 00:22:14.106 "action_on_timeout": "none", 00:22:14.106 "timeout_us": 0, 00:22:14.106 "timeout_admin_us": 0, 00:22:14.106 "keep_alive_timeout_ms": 10000, 00:22:14.106 "arbitration_burst": 0, 00:22:14.106 "low_priority_weight": 0, 00:22:14.106 "medium_priority_weight": 0, 00:22:14.106 "high_priority_weight": 0, 00:22:14.106 "nvme_adminq_poll_period_us": 10000, 00:22:14.107 "nvme_ioq_poll_period_us": 0, 00:22:14.107 "io_queue_requests": 512, 00:22:14.107 "delay_cmd_submit": true, 00:22:14.107 "transport_retry_count": 4, 00:22:14.107 "bdev_retry_count": 3, 00:22:14.107 "transport_ack_timeout": 0, 00:22:14.107 "ctrlr_loss_timeout_sec": 0, 00:22:14.107 "reconnect_delay_sec": 0, 00:22:14.107 "fast_io_fail_timeout_sec": 0, 00:22:14.107 "disable_auto_failback": false, 00:22:14.107 "generate_uuids": false, 00:22:14.107 "transport_tos": 0, 00:22:14.107 "nvme_error_stat": false, 00:22:14.107 "rdma_srq_size": 0, 00:22:14.107 "io_path_stat": false, 00:22:14.107 "allow_accel_sequence": false, 00:22:14.107 "rdma_max_cq_size": 0, 00:22:14.107 "rdma_cm_event_timeout_ms": 0, 00:22:14.107 "dhchap_digests": [ 00:22:14.107 "sha256", 00:22:14.107 "sha384", 00:22:14.107 "sha512" 00:22:14.107 ], 00:22:14.107 "dhchap_dhgroups": [ 00:22:14.107 "null", 00:22:14.107 "ffdhe2048", 00:22:14.107 "ffdhe3072", 00:22:14.107 "ffdhe4096", 00:22:14.107 "ffdhe6144", 00:22:14.107 "ffdhe8192" 00:22:14.107 ] 00:22:14.107 } 00:22:14.107 }, 00:22:14.107 { 00:22:14.107 "method": "bdev_nvme_attach_controller", 00:22:14.107 "params": { 00:22:14.107 "name": "nvme0", 00:22:14.107 "trtype": "TCP", 00:22:14.107 "adrfam": "IPv4", 00:22:14.107 "traddr": "10.0.0.2", 00:22:14.107 "trsvcid": "4420", 00:22:14.107 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.107 "prchk_reftag": false, 00:22:14.107 "prchk_guard": false, 00:22:14.107 "ctrlr_loss_timeout_sec": 0, 00:22:14.107 "reconnect_delay_sec": 0, 00:22:14.107 "fast_io_fail_timeout_sec": 0, 00:22:14.107 "psk": "key0", 00:22:14.107 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.107 "hdgst": false, 00:22:14.107 "ddgst": false 00:22:14.107 } 00:22:14.107 }, 00:22:14.107 { 00:22:14.107 "method": "bdev_nvme_set_hotplug", 00:22:14.107 "params": { 00:22:14.107 "period_us": 100000, 00:22:14.107 "enable": false 00:22:14.107 } 00:22:14.107 }, 00:22:14.107 { 00:22:14.107 "method": "bdev_enable_histogram", 00:22:14.107 "params": { 00:22:14.107 "name": "nvme0n1", 00:22:14.107 "enable": true 00:22:14.107 } 00:22:14.107 }, 00:22:14.107 { 00:22:14.107 "method": "bdev_wait_for_examine" 00:22:14.107 } 00:22:14.107 ] 00:22:14.107 }, 00:22:14.107 { 00:22:14.107 "subsystem": "nbd", 00:22:14.107 "config": [] 00:22:14.107 } 00:22:14.107 ] 00:22:14.107 }' 00:22:14.107 19:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1474491 00:22:14.107 19:18:20 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1474491 ']' 00:22:14.107 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1474491 00:22:14.107 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:14.107 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.107 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474491 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474491' 00:22:14.368 killing process with pid 1474491 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1474491 00:22:14.368 Received shutdown signal, test time was about 1.000000 seconds 00:22:14.368 00:22:14.368 Latency(us) 00:22:14.368 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.368 =================================================================================================================== 00:22:14.368 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1474491 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1474182 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1474182 ']' 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1474182 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1474182 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1474182' 00:22:14.368 killing process with pid 1474182 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1474182 00:22:14.368 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1474182 00:22:14.629 19:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:14.629 19:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:14.629 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:14.629 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.629 19:18:20 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:14.629 "subsystems": [ 00:22:14.629 { 00:22:14.629 "subsystem": "keyring", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "keyring_file_add_key", 00:22:14.629 "params": { 00:22:14.629 "name": "key0", 00:22:14.629 "path": "/tmp/tmp.9cDEgfiRJN" 00:22:14.629 } 00:22:14.629 } 00:22:14.629 ] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "iobuf", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "iobuf_set_options", 00:22:14.629 "params": { 00:22:14.629 "small_pool_count": 8192, 00:22:14.629 "large_pool_count": 1024, 00:22:14.629 "small_bufsize": 8192, 00:22:14.629 
"large_bufsize": 135168 00:22:14.629 } 00:22:14.629 } 00:22:14.629 ] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "sock", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "sock_set_default_impl", 00:22:14.629 "params": { 00:22:14.629 "impl_name": "posix" 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "sock_impl_set_options", 00:22:14.629 "params": { 00:22:14.629 "impl_name": "ssl", 00:22:14.629 "recv_buf_size": 4096, 00:22:14.629 "send_buf_size": 4096, 00:22:14.629 "enable_recv_pipe": true, 00:22:14.629 "enable_quickack": false, 00:22:14.629 "enable_placement_id": 0, 00:22:14.629 "enable_zerocopy_send_server": true, 00:22:14.629 "enable_zerocopy_send_client": false, 00:22:14.629 "zerocopy_threshold": 0, 00:22:14.629 "tls_version": 0, 00:22:14.629 "enable_ktls": false 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "sock_impl_set_options", 00:22:14.629 "params": { 00:22:14.629 "impl_name": "posix", 00:22:14.629 "recv_buf_size": 2097152, 00:22:14.629 "send_buf_size": 2097152, 00:22:14.629 "enable_recv_pipe": true, 00:22:14.629 "enable_quickack": false, 00:22:14.629 "enable_placement_id": 0, 00:22:14.629 "enable_zerocopy_send_server": true, 00:22:14.629 "enable_zerocopy_send_client": false, 00:22:14.629 "zerocopy_threshold": 0, 00:22:14.629 "tls_version": 0, 00:22:14.629 "enable_ktls": false 00:22:14.629 } 00:22:14.629 } 00:22:14.629 ] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "vmd", 00:22:14.629 "config": [] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "accel", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "accel_set_options", 00:22:14.629 "params": { 00:22:14.629 "small_cache_size": 128, 00:22:14.629 "large_cache_size": 16, 00:22:14.629 "task_count": 2048, 00:22:14.629 "sequence_count": 2048, 00:22:14.629 "buf_count": 2048 00:22:14.629 } 00:22:14.629 } 00:22:14.629 ] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "bdev", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "bdev_set_options", 00:22:14.629 "params": { 00:22:14.629 "bdev_io_pool_size": 65535, 00:22:14.629 "bdev_io_cache_size": 256, 00:22:14.629 "bdev_auto_examine": true, 00:22:14.629 "iobuf_small_cache_size": 128, 00:22:14.629 "iobuf_large_cache_size": 16 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "bdev_raid_set_options", 00:22:14.629 "params": { 00:22:14.629 "process_window_size_kb": 1024 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "bdev_iscsi_set_options", 00:22:14.629 "params": { 00:22:14.629 "timeout_sec": 30 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "bdev_nvme_set_options", 00:22:14.629 "params": { 00:22:14.629 "action_on_timeout": "none", 00:22:14.629 "timeout_us": 0, 00:22:14.629 "timeout_admin_us": 0, 00:22:14.629 "keep_alive_timeout_ms": 10000, 00:22:14.629 "arbitration_burst": 0, 00:22:14.629 "low_priority_weight": 0, 00:22:14.629 "medium_priority_weight": 0, 00:22:14.629 "high_priority_weight": 0, 00:22:14.629 "nvme_adminq_poll_period_us": 10000, 00:22:14.629 "nvme_ioq_poll_period_us": 0, 00:22:14.629 "io_queue_requests": 0, 00:22:14.629 "delay_cmd_submit": true, 00:22:14.629 "transport_retry_count": 4, 00:22:14.629 "bdev_retry_count": 3, 00:22:14.629 "transport_ack_timeout": 0, 00:22:14.629 "ctrlr_loss_timeout_sec": 0, 00:22:14.629 "reconnect_delay_sec": 0, 00:22:14.629 "fast_io_fail_timeout_sec": 0, 00:22:14.629 "disable_auto_failback": false, 00:22:14.629 "generate_uuids": false, 00:22:14.629 
"transport_tos": 0, 00:22:14.629 "nvme_error_stat": false, 00:22:14.629 "rdma_srq_size": 0, 00:22:14.629 "io_path_stat": false, 00:22:14.629 "allow_accel_sequence": false, 00:22:14.629 "rdma_max_cq_size": 0, 00:22:14.629 "rdma_cm_event_timeout_ms": 0, 00:22:14.629 "dhchap_digests": [ 00:22:14.629 "sha256", 00:22:14.629 "sha384", 00:22:14.629 "sha512" 00:22:14.629 ], 00:22:14.629 "dhchap_dhgroups": [ 00:22:14.629 "null", 00:22:14.629 "ffdhe2048", 00:22:14.629 "ffdhe3072", 00:22:14.629 "ffdhe4096", 00:22:14.629 "ffdhe6144", 00:22:14.629 "ffdhe8192" 00:22:14.629 ] 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "bdev_nvme_set_hotplug", 00:22:14.629 "params": { 00:22:14.629 "period_us": 100000, 00:22:14.629 "enable": false 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "bdev_malloc_create", 00:22:14.629 "params": { 00:22:14.629 "name": "malloc0", 00:22:14.629 "num_blocks": 8192, 00:22:14.629 "block_size": 4096, 00:22:14.629 "physical_block_size": 4096, 00:22:14.629 "uuid": "4ca432db-f095-41f9-b731-18559aa4c8bd", 00:22:14.629 "optimal_io_boundary": 0 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "bdev_wait_for_examine" 00:22:14.629 } 00:22:14.629 ] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "nbd", 00:22:14.629 "config": [] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "scheduler", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "framework_set_scheduler", 00:22:14.629 "params": { 00:22:14.629 "name": "static" 00:22:14.629 } 00:22:14.629 } 00:22:14.629 ] 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "subsystem": "nvmf", 00:22:14.629 "config": [ 00:22:14.629 { 00:22:14.629 "method": "nvmf_set_config", 00:22:14.629 "params": { 00:22:14.629 "discovery_filter": "match_any", 00:22:14.629 "admin_cmd_passthru": { 00:22:14.629 "identify_ctrlr": false 00:22:14.629 } 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "nvmf_set_max_subsystems", 00:22:14.629 "params": { 00:22:14.629 "max_subsystems": 1024 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "nvmf_set_crdt", 00:22:14.629 "params": { 00:22:14.629 "crdt1": 0, 00:22:14.629 "crdt2": 0, 00:22:14.629 "crdt3": 0 00:22:14.629 } 00:22:14.629 }, 00:22:14.629 { 00:22:14.629 "method": "nvmf_create_transport", 00:22:14.629 "params": { 00:22:14.629 "trtype": "TCP", 00:22:14.629 "max_queue_depth": 128, 00:22:14.629 "max_io_qpairs_per_ctrlr": 127, 00:22:14.629 "in_capsule_data_size": 4096, 00:22:14.629 "max_io_size": 131072, 00:22:14.629 "io_unit_size": 131072, 00:22:14.629 "max_aq_depth": 128, 00:22:14.629 "num_shared_buffers": 511, 00:22:14.629 "buf_cache_size": 4294967295, 00:22:14.629 "dif_insert_or_strip": false, 00:22:14.629 "zcopy": false, 00:22:14.629 "c2h_success": false, 00:22:14.629 "sock_priority": 0, 00:22:14.629 "abort_timeout_sec": 1, 00:22:14.629 "ack_timeout": 0, 00:22:14.630 "data_wr_pool_size": 0 00:22:14.630 } 00:22:14.630 }, 00:22:14.630 { 00:22:14.630 "method": "nvmf_create_subsystem", 00:22:14.630 "params": { 00:22:14.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.630 "allow_any_host": false, 00:22:14.630 "serial_number": "00000000000000000000", 00:22:14.630 "model_number": "SPDK bdev Controller", 00:22:14.630 "max_namespaces": 32, 00:22:14.630 "min_cntlid": 1, 00:22:14.630 "max_cntlid": 65519, 00:22:14.630 "ana_reporting": false 00:22:14.630 } 00:22:14.630 }, 00:22:14.630 { 00:22:14.630 "method": "nvmf_subsystem_add_host", 00:22:14.630 "params": { 00:22:14.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:14.630 "host": "nqn.2016-06.io.spdk:host1", 00:22:14.630 "psk": "key0" 00:22:14.630 } 00:22:14.630 }, 00:22:14.630 { 00:22:14.630 "method": "nvmf_subsystem_add_ns", 00:22:14.630 "params": { 00:22:14.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.630 "namespace": { 00:22:14.630 "nsid": 1, 00:22:14.630 "bdev_name": "malloc0", 00:22:14.630 "nguid": "4CA432DBF09541F9B73118559AA4C8BD", 00:22:14.630 "uuid": "4ca432db-f095-41f9-b731-18559aa4c8bd", 00:22:14.630 "no_auto_visible": false 00:22:14.630 } 00:22:14.630 } 00:22:14.630 }, 00:22:14.630 { 00:22:14.630 "method": "nvmf_subsystem_add_listener", 00:22:14.630 "params": { 00:22:14.630 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.630 "listen_address": { 00:22:14.630 "trtype": "TCP", 00:22:14.630 "adrfam": "IPv4", 00:22:14.630 "traddr": "10.0.0.2", 00:22:14.630 "trsvcid": "4420" 00:22:14.630 }, 00:22:14.630 "secure_channel": true 00:22:14.630 } 00:22:14.630 } 00:22:14.630 ] 00:22:14.630 } 00:22:14.630 ] 00:22:14.630 }' 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1475183 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1475183 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1475183 ']' 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:14.630 19:18:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:14.630 [2024-07-12 19:18:20.614158] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:22:14.630 [2024-07-12 19:18:20.614215] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.630 EAL: No free 2048 kB hugepages reported on node 1 00:22:14.630 [2024-07-12 19:18:20.678598] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.630 [2024-07-12 19:18:20.743110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.630 [2024-07-12 19:18:20.743148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.630 [2024-07-12 19:18:20.743156] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:14.630 [2024-07-12 19:18:20.743162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:14.630 [2024-07-12 19:18:20.743168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:14.630 [2024-07-12 19:18:20.743216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:14.890 [2024-07-12 19:18:20.940603] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:14.890 [2024-07-12 19:18:20.972612] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.890 [2024-07-12 19:18:20.984431] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1475222 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1475222 /var/tmp/bdevperf.sock 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1475222 ']' 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:15.460 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
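The nvmf_tgt that just came up listening (pid 1475183) was not configured call by call: nvmfappstart handed the previously captured tgtcfg blob back in as -c /dev/fd/62, so the cnode1 subsystem, the TLS listener on 10.0.0.2:4420 (secure_channel), the malloc0 namespace and the key0 PSK host entry are restored straight from the saved JSON. A standalone sketch of the same launch, leaning on bash process substitution:

  # Replay a saved target configuration at startup inside the test namespace;
  # the <(...) descriptor is what the trace shows as -c /dev/fd/62.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(echo "$tgtcfg")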
00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.460 19:18:21 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:15.460 "subsystems": [ 00:22:15.460 { 00:22:15.460 "subsystem": "keyring", 00:22:15.460 "config": [ 00:22:15.460 { 00:22:15.460 "method": "keyring_file_add_key", 00:22:15.460 "params": { 00:22:15.460 "name": "key0", 00:22:15.460 "path": "/tmp/tmp.9cDEgfiRJN" 00:22:15.460 } 00:22:15.460 } 00:22:15.460 ] 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "subsystem": "iobuf", 00:22:15.460 "config": [ 00:22:15.460 { 00:22:15.460 "method": "iobuf_set_options", 00:22:15.460 "params": { 00:22:15.460 "small_pool_count": 8192, 00:22:15.460 "large_pool_count": 1024, 00:22:15.460 "small_bufsize": 8192, 00:22:15.460 "large_bufsize": 135168 00:22:15.460 } 00:22:15.460 } 00:22:15.460 ] 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "subsystem": "sock", 00:22:15.460 "config": [ 00:22:15.460 { 00:22:15.460 "method": "sock_set_default_impl", 00:22:15.460 "params": { 00:22:15.460 "impl_name": "posix" 00:22:15.460 } 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "method": "sock_impl_set_options", 00:22:15.460 "params": { 00:22:15.460 "impl_name": "ssl", 00:22:15.460 "recv_buf_size": 4096, 00:22:15.460 "send_buf_size": 4096, 00:22:15.460 "enable_recv_pipe": true, 00:22:15.460 "enable_quickack": false, 00:22:15.460 "enable_placement_id": 0, 00:22:15.460 "enable_zerocopy_send_server": true, 00:22:15.460 "enable_zerocopy_send_client": false, 00:22:15.460 "zerocopy_threshold": 0, 00:22:15.460 "tls_version": 0, 00:22:15.460 "enable_ktls": false 00:22:15.460 } 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "method": "sock_impl_set_options", 00:22:15.460 "params": { 00:22:15.460 "impl_name": "posix", 00:22:15.460 "recv_buf_size": 2097152, 00:22:15.460 "send_buf_size": 2097152, 00:22:15.460 "enable_recv_pipe": true, 00:22:15.460 "enable_quickack": false, 00:22:15.460 "enable_placement_id": 0, 00:22:15.460 "enable_zerocopy_send_server": true, 00:22:15.460 "enable_zerocopy_send_client": false, 00:22:15.460 "zerocopy_threshold": 0, 00:22:15.460 "tls_version": 0, 00:22:15.460 "enable_ktls": false 00:22:15.460 } 00:22:15.460 } 00:22:15.460 ] 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "subsystem": "vmd", 00:22:15.460 "config": [] 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "subsystem": "accel", 00:22:15.460 "config": [ 00:22:15.460 { 00:22:15.460 "method": "accel_set_options", 00:22:15.460 "params": { 00:22:15.460 "small_cache_size": 128, 00:22:15.460 "large_cache_size": 16, 00:22:15.460 "task_count": 2048, 00:22:15.460 "sequence_count": 2048, 00:22:15.460 "buf_count": 2048 00:22:15.460 } 00:22:15.460 } 00:22:15.460 ] 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "subsystem": "bdev", 00:22:15.460 "config": [ 00:22:15.460 { 00:22:15.460 "method": "bdev_set_options", 00:22:15.460 "params": { 00:22:15.460 "bdev_io_pool_size": 65535, 00:22:15.460 "bdev_io_cache_size": 256, 00:22:15.460 "bdev_auto_examine": true, 00:22:15.460 "iobuf_small_cache_size": 128, 00:22:15.460 "iobuf_large_cache_size": 16 00:22:15.460 } 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "method": "bdev_raid_set_options", 00:22:15.460 "params": { 00:22:15.460 "process_window_size_kb": 1024 00:22:15.460 } 
00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "method": "bdev_iscsi_set_options", 00:22:15.460 "params": { 00:22:15.460 "timeout_sec": 30 00:22:15.460 } 00:22:15.460 }, 00:22:15.460 { 00:22:15.460 "method": "bdev_nvme_set_options", 00:22:15.460 "params": { 00:22:15.460 "action_on_timeout": "none", 00:22:15.460 "timeout_us": 0, 00:22:15.460 "timeout_admin_us": 0, 00:22:15.460 "keep_alive_timeout_ms": 10000, 00:22:15.460 "arbitration_burst": 0, 00:22:15.460 "low_priority_weight": 0, 00:22:15.460 "medium_priority_weight": 0, 00:22:15.460 "high_priority_weight": 0, 00:22:15.460 "nvme_adminq_poll_period_us": 10000, 00:22:15.460 "nvme_ioq_poll_period_us": 0, 00:22:15.460 "io_queue_requests": 512, 00:22:15.460 "delay_cmd_submit": true, 00:22:15.460 "transport_retry_count": 4, 00:22:15.460 "bdev_retry_count": 3, 00:22:15.460 "transport_ack_timeout": 0, 00:22:15.460 "ctrlr_loss_timeout_sec": 0, 00:22:15.460 "reconnect_delay_sec": 0, 00:22:15.460 "fast_io_fail_timeout_sec": 0, 00:22:15.460 "disable_auto_failback": false, 00:22:15.460 "generate_uuids": false, 00:22:15.460 "transport_tos": 0, 00:22:15.460 "nvme_error_stat": false, 00:22:15.460 "rdma_srq_size": 0, 00:22:15.460 "io_path_stat": false, 00:22:15.460 "allow_accel_sequence": false, 00:22:15.460 "rdma_max_cq_size": 0, 00:22:15.460 "rdma_cm_event_timeout_ms": 0, 00:22:15.460 "dhchap_digests": [ 00:22:15.460 "sha256", 00:22:15.460 "sha384", 00:22:15.460 "sha512" 00:22:15.460 ], 00:22:15.460 "dhchap_dhgroups": [ 00:22:15.460 "null", 00:22:15.460 "ffdhe2048", 00:22:15.460 "ffdhe3072", 00:22:15.460 "ffdhe4096", 00:22:15.460 "ffdhe6144", 00:22:15.460 "ffdhe8192" 00:22:15.460 ] 00:22:15.460 } 00:22:15.461 }, 00:22:15.461 { 00:22:15.461 "method": "bdev_nvme_attach_controller", 00:22:15.461 "params": { 00:22:15.461 "name": "nvme0", 00:22:15.461 "trtype": "TCP", 00:22:15.461 "adrfam": "IPv4", 00:22:15.461 "traddr": "10.0.0.2", 00:22:15.461 "trsvcid": "4420", 00:22:15.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:15.461 "prchk_reftag": false, 00:22:15.461 "prchk_guard": false, 00:22:15.461 "ctrlr_loss_timeout_sec": 0, 00:22:15.461 "reconnect_delay_sec": 0, 00:22:15.461 "fast_io_fail_timeout_sec": 0, 00:22:15.461 "psk": "key0", 00:22:15.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:15.461 "hdgst": false, 00:22:15.461 "ddgst": false 00:22:15.461 } 00:22:15.461 }, 00:22:15.461 { 00:22:15.461 "method": "bdev_nvme_set_hotplug", 00:22:15.461 "params": { 00:22:15.461 "period_us": 100000, 00:22:15.461 "enable": false 00:22:15.461 } 00:22:15.461 }, 00:22:15.461 { 00:22:15.461 "method": "bdev_enable_histogram", 00:22:15.461 "params": { 00:22:15.461 "name": "nvme0n1", 00:22:15.461 "enable": true 00:22:15.461 } 00:22:15.461 }, 00:22:15.461 { 00:22:15.461 "method": "bdev_wait_for_examine" 00:22:15.461 } 00:22:15.461 ] 00:22:15.461 }, 00:22:15.461 { 00:22:15.461 "subsystem": "nbd", 00:22:15.461 "config": [] 00:22:15.461 } 00:22:15.461 ] 00:22:15.461 }' 00:22:15.461 [2024-07-12 19:18:21.471579] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
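bdevperf itself is started idle (-z, queue depth 128, 4k verify I/O for 1 second once kicked) and reads its own JSON config from /dev/fd/63. The TLS-relevant part is symmetric with the target side: the PSK file is registered in the keyring under the name key0, and bdev_nvme_attach_controller references that key, so the nvme0 controller it creates connects to 10.0.0.2:4420 over a TLS-protected TCP channel (that attach is what prints the "TLS support is considered experimental" notice a few lines further down). Condensed from the config echoed above, with the unrelated defaults dropped:

    {
      "method": "keyring_file_add_key",
      "params": { "name": "key0", "path": "/tmp/tmp.9cDEgfiRJN" }
    },
    {
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "nvme0",
        "trtype": "TCP",
        "adrfam": "IPv4",
        "traddr": "10.0.0.2",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "psk": "key0"
      }
    }

Once the controller exists, the test only has to confirm via bdev_nvme_get_controllers that nvme0 came up and then drive I/O through it with bdevperf.py perform_tests, which is exactly what the next block of trace output shows.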
00:22:15.461 [2024-07-12 19:18:21.471632] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1475222 ] 00:22:15.461 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.461 [2024-07-12 19:18:21.544367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.721 [2024-07-12 19:18:21.597954] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.721 [2024-07-12 19:18:21.731578] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:16.293 19:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:16.293 19:18:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:16.293 19:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:16.293 19:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:22:16.293 19:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:16.293 19:18:22 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:16.554 Running I/O for 1 seconds... 00:22:17.495 00:22:17.495 Latency(us) 00:22:17.495 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.495 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:17.495 Verification LBA range: start 0x0 length 0x2000 00:22:17.495 nvme0n1 : 1.06 2128.57 8.31 0.00 0.00 58603.61 4614.83 100925.44 00:22:17.495 =================================================================================================================== 00:22:17.495 Total : 2128.57 8.31 0.00 0.00 58603.61 4614.83 100925.44 00:22:17.495 0 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:17.495 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:17.495 nvmf_trace.0 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1475222 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1475222 ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 1475222 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1475222 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1475222' 00:22:17.756 killing process with pid 1475222 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1475222 00:22:17.756 Received shutdown signal, test time was about 1.000000 seconds 00:22:17.756 00:22:17.756 Latency(us) 00:22:17.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.756 =================================================================================================================== 00:22:17.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1475222 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:17.756 rmmod nvme_tcp 00:22:17.756 rmmod nvme_fabrics 00:22:17.756 rmmod nvme_keyring 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1475183 ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1475183 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1475183 ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1475183 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.756 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1475183 00:22:18.017 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.017 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.017 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1475183' 00:22:18.017 killing process with pid 1475183 00:22:18.017 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1475183 00:22:18.017 19:18:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1475183 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:18.017 19:18:24 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:18.017 19:18:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.563 19:18:26 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.563 19:18:26 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.lYZPUnMQQ5 /tmp/tmp.Z3iTg90ui5 /tmp/tmp.9cDEgfiRJN 00:22:20.563 00:22:20.563 real 1m23.458s 00:22:20.563 user 2m7.210s 00:22:20.563 sys 0m28.257s 00:22:20.563 19:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.563 19:18:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.563 ************************************ 00:22:20.563 END TEST nvmf_tls 00:22:20.563 ************************************ 00:22:20.563 19:18:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:20.563 19:18:26 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:20.563 19:18:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:20.563 19:18:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.563 19:18:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:20.563 ************************************ 00:22:20.563 START TEST nvmf_fips 00:22:20.563 ************************************ 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:20.563 * Looking for test storage... 
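The nvmf_fips run that starts here gates on the host OpenSSL before generating any NVMe/TCP traffic: it requires OpenSSL 3.x (the version printed by openssl version is compared against 3.0.0), checks that the FIPS provider module exists under the directory reported by openssl info -modulesdir, points OPENSSL_CONF at a generated spdk_fips.conf so that only the base and fips providers are loaded, and finally confirms enforcement by expecting openssl md5 to fail, since MD5 is not a FIPS-approved digest. A condensed sketch of that gate (the real checks live in fips.sh and scripts/common.sh; the sort -V comparison and the /dev/null input below are stand-ins for the script's own helpers):

    # Illustrative reconstruction of the pre-flight FIPS gate seen in the trace below.
    ver=$(openssl version | awk '{print $2}')                    # e.g. 3.0.9
    [[ "$(printf '%s\n' 3.0.0 "$ver" | sort -V | head -n1)" == 3.0.0 ]] || exit 1
    moddir=$(openssl info -modulesdir)                           # e.g. /usr/lib64/ossl-modules
    [[ -f "$moddir/fips.so" ]] || exit 1
    export OPENSSL_CONF=spdk_fips.conf                           # generated by build_openssl_config (not shown)
    openssl list -providers | grep name                          # expect a base and a fips provider
    if openssl md5 /dev/null >/dev/null 2>&1; then
        echo "MD5 still usable: FIPS mode is not being enforced" >&2
        exit 1
    fi

Only after this gate passes does the script bring the NVMe-oF target up again with a TLS PSK and repeat the bdevperf verify workload, this time running for 10 seconds.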
00:22:20.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.563 19:18:26 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.563 19:18:26 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:20.564 Error setting digest 00:22:20.564 0052CD35A87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:20.564 0052CD35A87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.564 19:18:26 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.708 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:28.709 
19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:28.709 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:28.709 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:28.709 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:28.709 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:28.709 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.709 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.542 ms 00:22:28.709 00:22:28.709 --- 10.0.0.2 ping statistics --- 00:22:28.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.709 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:28.709 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.709 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:22:28.709 00:22:28.709 --- 10.0.0.1 ping statistics --- 00:22:28.709 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.709 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1479917 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1479917 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1479917 ']' 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.709 19:18:33 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.709 [2024-07-12 19:18:33.728435] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:22:28.709 [2024-07-12 19:18:33.728505] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.709 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.709 [2024-07-12 19:18:33.816559] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.709 [2024-07-12 19:18:33.909093] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.709 [2024-07-12 19:18:33.909152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:28.709 [2024-07-12 19:18:33.909165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.709 [2024-07-12 19:18:33.909172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.709 [2024-07-12 19:18:33.909178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.709 [2024-07-12 19:18:33.909204] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:28.709 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:28.709 [2024-07-12 19:18:34.689535] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.709 [2024-07-12 19:18:34.705526] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:28.709 [2024-07-12 19:18:34.705812] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.710 [2024-07-12 19:18:34.735607] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:28.710 malloc0 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1480264 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1480264 /var/tmp/bdevperf.sock 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1480264 ']' 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:28.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:28.710 19:18:34 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:28.969 [2024-07-12 19:18:34.838523] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:22:28.969 [2024-07-12 19:18:34.838592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1480264 ] 00:22:28.969 EAL: No free 2048 kB hugepages reported on node 1 00:22:28.969 [2024-07-12 19:18:34.893319] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.969 [2024-07-12 19:18:34.957393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:29.539 19:18:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:29.539 19:18:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:29.540 19:18:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:29.836 [2024-07-12 19:18:35.733221] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.836 [2024-07-12 19:18:35.733283] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:29.836 TLSTESTn1 00:22:29.836 19:18:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:29.836 Running I/O for 10 seconds... 
00:22:42.067 00:22:42.067 Latency(us) 00:22:42.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.067 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:42.067 Verification LBA range: start 0x0 length 0x2000 00:22:42.067 TLSTESTn1 : 10.03 3289.93 12.85 0.00 0.00 38841.59 5625.17 50899.63 00:22:42.067 =================================================================================================================== 00:22:42.067 Total : 3289.93 12.85 0.00 0.00 38841.59 5625.17 50899.63 00:22:42.067 0 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:42.067 19:18:45 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:42.067 nvmf_trace.0 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1480264 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1480264 ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1480264 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1480264 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1480264' 00:22:42.067 killing process with pid 1480264 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1480264 00:22:42.067 Received shutdown signal, test time was about 10.000000 seconds 00:22:42.067 00:22:42.067 Latency(us) 00:22:42.067 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.067 =================================================================================================================== 00:22:42.067 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.067 [2024-07-12 19:18:46.143520] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1480264 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:42.067 rmmod nvme_tcp 00:22:42.067 rmmod nvme_fabrics 00:22:42.067 rmmod nvme_keyring 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1479917 ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1479917 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1479917 ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1479917 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1479917 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1479917' 00:22:42.067 killing process with pid 1479917 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1479917 00:22:42.067 [2024-07-12 19:18:46.382511] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1479917 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:42.067 19:18:46 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:42.639 19:18:48 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:42.640 19:18:48 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:42.640 00:22:42.640 real 0m22.388s 00:22:42.640 user 0m23.078s 00:22:42.640 sys 0m10.007s 00:22:42.640 19:18:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:42.640 19:18:48 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:42.640 ************************************ 00:22:42.640 END TEST nvmf_fips 
00:22:42.640 ************************************ 00:22:42.640 19:18:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:42.640 19:18:48 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:42.640 19:18:48 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:42.640 19:18:48 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:42.640 19:18:48 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:42.640 19:18:48 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:42.640 19:18:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:50.858 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:50.858 19:18:55 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:50.858 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:50.858 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:50.858 19:18:55 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:50.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:50.859 19:18:55 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:50.859 19:18:55 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:50.859 19:18:55 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:50.859 19:18:55 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:50.859 19:18:55 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:50.859 19:18:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:50.859 19:18:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
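The gather_supported_nvmf_pci_devs trace above boils down to a device-ID lookup: supported NICs are grouped by family (E810: 0x1592/0x159b, X722: 0x37d2, plus the Mellanox IDs) and the matching PCI functions become pci_devs, from which TCP_INTERFACE_LIST is later built. A minimal standalone sketch of that classification, with the device IDs taken from the trace; the lspci call here is only illustrative, since the harness keeps its own pci_bus_cache rather than shelling out to lspci:

  # Sketch: list PCI functions for the NIC device IDs the harness treats as
  # ADQ/NVMe-TCP capable. IDs come from the e810/x722 arrays traced above;
  # using lspci is an assumption made for this illustration only.
  intel=8086
  e810_ids="1592 159b"; x722_ids="37d2"
  for id in $e810_ids $x722_ids; do
      # -D keeps the PCI domain, -d vendor:device restricts to matching functions
      lspci -D -d "${intel}:${id}" | awk -v id="$id" '{print "Found", $1, "(0x8086 - 0x" id ")"}'
  done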
00:22:50.859 19:18:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:50.859 ************************************ 00:22:50.859 START TEST nvmf_perf_adq 00:22:50.859 ************************************ 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:50.859 * Looking for test storage... 00:22:50.859 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:50.859 19:18:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:22:57.467 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:22:57.467 Found 0000:4b:00.1 (0x8086 - 0x159b) 
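Once a function such as 0000:4b:00.0 is matched, the pci_net_devs glob in the trace resolves it to its kernel interface through sysfs and keeps it only if the link is up. A self-contained sketch of that lookup, assuming nothing beyond the standard sysfs layout; the PCI address is simply the one reported in this run:

  # Resolve a PCI function to its net device, as the harness does with
  # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*). Address taken from the
  # discovery output above; any NIC bound to a netdev driver works the same way.
  pci=0000:4b:00.0
  for d in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$d" ] || continue                        # no net/ entry: not bound to a network driver
      dev=${d##*/}
      state=$(cat /sys/class/net/"$dev"/operstate)   # the scripts additionally require "up"
      echo "Found net devices under $pci: $dev ($state)"
  done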
00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:22:57.467 Found net devices under 0000:4b:00.0: cvl_0_0 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:22:57.467 Found net devices under 0000:4b:00.1: cvl_0_1 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:57.467 19:19:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:57.728 19:19:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:00.273 19:19:05 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:05.563 19:19:10 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:05.563 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:05.563 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.563 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:05.564 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:05.564 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:05.564 19:19:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.564 19:19:11 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:23:05.564 00:23:05.564 --- 10.0.0.2 ping statistics --- 00:23:05.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.564 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:23:05.564 00:23:05.564 --- 10.0.0.1 ping statistics --- 00:23:05.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.564 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1491987 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1491987 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1491987 ']' 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:05.564 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.564 [2024-07-12 19:19:11.203415] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
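The two pings above confirm the split topology before the target is started: cvl_0_0 (10.0.0.2) has been moved into the cvl_0_0_ns_spdk namespace for the target side, while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, and port 4420 is opened with iptables. A condensed sketch of that reachability check, reusing the namespace, interface and address values from this run; every later target-side command is wrapped the same way:

  # Reachability check across the namespace split set up above.
  ns=cvl_0_0_ns_spdk
  ping -c 1 10.0.0.2                        # root namespace (initiator side) -> target port inside the ns
  ip netns exec "$ns" ping -c 1 10.0.0.1    # inside the ns (target side) -> initiator port
  # The same "ip netns exec $ns" prefix is what NVMF_TARGET_NS_CMD adds in front
  # of nvmf_tgt, which is why the target app is launched under the namespace.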
00:23:05.564 [2024-07-12 19:19:11.203481] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.564 EAL: No free 2048 kB hugepages reported on node 1 00:23:05.564 [2024-07-12 19:19:11.273931] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.564 [2024-07-12 19:19:11.350832] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.564 [2024-07-12 19:19:11.350869] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.564 [2024-07-12 19:19:11.350878] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.564 [2024-07-12 19:19:11.350884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.564 [2024-07-12 19:19:11.350890] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:05.564 [2024-07-12 19:19:11.350942] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.564 [2024-07-12 19:19:11.351052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.564 [2024-07-12 19:19:11.351196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.564 [2024-07-12 19:19:11.351196] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.134 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.134 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:06.134 19:19:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.134 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:06.134 19:19:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 [2024-07-12 19:19:12.163143] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 Malloc1 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.134 [2024-07-12 19:19:12.222562] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1492183 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:23:06.134 19:19:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:06.134 EAL: No free 2048 kB hugepages reported on node 1 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:23:08.677 
"tick_rate": 2400000000, 00:23:08.677 "poll_groups": [ 00:23:08.677 { 00:23:08.677 "name": "nvmf_tgt_poll_group_000", 00:23:08.677 "admin_qpairs": 1, 00:23:08.677 "io_qpairs": 1, 00:23:08.677 "current_admin_qpairs": 1, 00:23:08.677 "current_io_qpairs": 1, 00:23:08.677 "pending_bdev_io": 0, 00:23:08.677 "completed_nvme_io": 19934, 00:23:08.677 "transports": [ 00:23:08.677 { 00:23:08.677 "trtype": "TCP" 00:23:08.677 } 00:23:08.677 ] 00:23:08.677 }, 00:23:08.677 { 00:23:08.677 "name": "nvmf_tgt_poll_group_001", 00:23:08.677 "admin_qpairs": 0, 00:23:08.677 "io_qpairs": 1, 00:23:08.677 "current_admin_qpairs": 0, 00:23:08.677 "current_io_qpairs": 1, 00:23:08.677 "pending_bdev_io": 0, 00:23:08.677 "completed_nvme_io": 29336, 00:23:08.677 "transports": [ 00:23:08.677 { 00:23:08.677 "trtype": "TCP" 00:23:08.677 } 00:23:08.677 ] 00:23:08.677 }, 00:23:08.677 { 00:23:08.677 "name": "nvmf_tgt_poll_group_002", 00:23:08.677 "admin_qpairs": 0, 00:23:08.677 "io_qpairs": 1, 00:23:08.677 "current_admin_qpairs": 0, 00:23:08.677 "current_io_qpairs": 1, 00:23:08.677 "pending_bdev_io": 0, 00:23:08.677 "completed_nvme_io": 21405, 00:23:08.677 "transports": [ 00:23:08.677 { 00:23:08.677 "trtype": "TCP" 00:23:08.677 } 00:23:08.677 ] 00:23:08.677 }, 00:23:08.677 { 00:23:08.677 "name": "nvmf_tgt_poll_group_003", 00:23:08.677 "admin_qpairs": 0, 00:23:08.677 "io_qpairs": 1, 00:23:08.677 "current_admin_qpairs": 0, 00:23:08.677 "current_io_qpairs": 1, 00:23:08.677 "pending_bdev_io": 0, 00:23:08.677 "completed_nvme_io": 19611, 00:23:08.677 "transports": [ 00:23:08.677 { 00:23:08.677 "trtype": "TCP" 00:23:08.677 } 00:23:08.677 ] 00:23:08.677 } 00:23:08.677 ] 00:23:08.677 }' 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:23:08.677 19:19:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1492183 00:23:16.817 Initializing NVMe Controllers 00:23:16.817 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:16.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:16.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:16.817 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:16.817 Initialization complete. Launching workers. 
00:23:16.817 ======================================================== 00:23:16.817 Latency(us) 00:23:16.817 Device Information : IOPS MiB/s Average min max 00:23:16.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11105.80 43.38 5762.96 1306.81 9246.57 00:23:16.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15039.60 58.75 4254.75 1008.45 9354.73 00:23:16.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 14448.50 56.44 4429.04 1231.98 11490.96 00:23:16.817 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 13710.20 53.56 4667.88 1132.78 10639.34 00:23:16.817 ======================================================== 00:23:16.817 Total : 54304.08 212.13 4713.87 1008.45 11490.96 00:23:16.817 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:16.817 rmmod nvme_tcp 00:23:16.817 rmmod nvme_fabrics 00:23:16.817 rmmod nvme_keyring 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1491987 ']' 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1491987 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1491987 ']' 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1491987 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1491987 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1491987' 00:23:16.817 killing process with pid 1491987 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1491987 00:23:16.817 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1491987 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:16.818 19:19:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:18.730 19:19:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:18.730 19:19:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:18.730 19:19:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:20.663 19:19:26 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:22.578 19:19:28 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.868 19:19:33 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:27.868 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:27.868 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:27.868 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:27.868 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:27.868 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:27.869 
19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:27.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:27.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:23:27.869 00:23:27.869 --- 10.0.0.2 ping statistics --- 00:23:27.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.869 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:27.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:23:27.869 00:23:27.869 --- 10.0.0.1 ping statistics --- 00:23:27.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.869 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:27.869 net.core.busy_poll = 1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:27.869 net.core.busy_read = 1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1496863 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1496863 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1496863 ']' 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:27.869 19:19:33 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.869 [2024-07-12 19:19:33.985117] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:23:27.869 [2024-07-12 19:19:33.985205] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:28.130 EAL: No free 2048 kB hugepages reported on node 1 00:23:28.130 [2024-07-12 19:19:34.055742] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:28.130 [2024-07-12 19:19:34.131126] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:28.130 [2024-07-12 19:19:34.131164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:28.130 [2024-07-12 19:19:34.131172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:28.130 [2024-07-12 19:19:34.131183] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:28.130 [2024-07-12 19:19:34.131188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
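The adq_configure_driver step traced above is the heart of the ADQ setup: it enables hardware TC offload and busy polling on the E810 port, splits its queues into two traffic classes with mqprio, and pins NVMe/TCP traffic (destination port 4420) to the second class with a hardware-only flower filter. A condensed sketch of that sequence, assuming the target interface is cvl_0_0 and is visible in the current namespace (the test actually prefixes each command with ip netns exec cvl_0_0_ns_spdk):

  IFACE=cvl_0_0                  # target-side ice port used in this run
  ethtool --offload $IFACE hw-tc-offload on
  ethtool --set-priv-flags $IFACE channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # TC 0 -> queues 0-1 (default traffic), TC 1 -> queues 2-3 (ADQ traffic)
  tc qdisc add dev $IFACE root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  tc qdisc add dev $IFACE ingress
  # steer NVMe/TCP (dst port 4420) into TC 1, offloaded to hardware only (skip_sw)
  tc filter add dev $IFACE protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The script then runs scripts/perf/nvmf/set_xps_rxqs on the same interface to align XPS transmit-queue selection with the ADQ queue set before the nvmf target is started.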
00:23:28.130 [2024-07-12 19:19:34.131266] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:28.130 [2024-07-12 19:19:34.131385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:28.130 [2024-07-12 19:19:34.131544] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:28.130 [2024-07-12 19:19:34.131545] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.702 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.963 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.964 [2024-07-12 19:19:34.937435] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.964 Malloc1 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.964 19:19:34 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:28.964 19:19:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.964 [2024-07-12 19:19:34.996749] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.964 19:19:35 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:28.964 19:19:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1496996 00:23:28.964 19:19:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:23:28.964 19:19:35 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:28.964 EAL: No free 2048 kB hugepages reported on node 1 00:23:31.509 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:23:31.509 19:19:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:31.509 19:19:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:31.509 19:19:37 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:31.509 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:23:31.509 "tick_rate": 2400000000, 00:23:31.509 "poll_groups": [ 00:23:31.509 { 00:23:31.509 "name": "nvmf_tgt_poll_group_000", 00:23:31.509 "admin_qpairs": 1, 00:23:31.509 "io_qpairs": 3, 00:23:31.509 "current_admin_qpairs": 1, 00:23:31.509 "current_io_qpairs": 3, 00:23:31.509 "pending_bdev_io": 0, 00:23:31.509 "completed_nvme_io": 30492, 00:23:31.509 "transports": [ 00:23:31.509 { 00:23:31.509 "trtype": "TCP" 00:23:31.509 } 00:23:31.509 ] 00:23:31.509 }, 00:23:31.510 { 00:23:31.510 "name": "nvmf_tgt_poll_group_001", 00:23:31.510 "admin_qpairs": 0, 00:23:31.510 "io_qpairs": 1, 00:23:31.510 "current_admin_qpairs": 0, 00:23:31.510 "current_io_qpairs": 1, 00:23:31.510 "pending_bdev_io": 0, 00:23:31.510 "completed_nvme_io": 37790, 00:23:31.510 "transports": [ 00:23:31.510 { 00:23:31.510 "trtype": "TCP" 00:23:31.510 } 00:23:31.510 ] 00:23:31.510 }, 00:23:31.510 { 00:23:31.510 "name": "nvmf_tgt_poll_group_002", 00:23:31.510 "admin_qpairs": 0, 00:23:31.510 "io_qpairs": 0, 00:23:31.510 "current_admin_qpairs": 0, 00:23:31.510 "current_io_qpairs": 0, 00:23:31.510 "pending_bdev_io": 0, 00:23:31.510 "completed_nvme_io": 0, 
00:23:31.510 "transports": [ 00:23:31.510 { 00:23:31.510 "trtype": "TCP" 00:23:31.510 } 00:23:31.510 ] 00:23:31.510 }, 00:23:31.510 { 00:23:31.510 "name": "nvmf_tgt_poll_group_003", 00:23:31.510 "admin_qpairs": 0, 00:23:31.510 "io_qpairs": 0, 00:23:31.510 "current_admin_qpairs": 0, 00:23:31.510 "current_io_qpairs": 0, 00:23:31.510 "pending_bdev_io": 0, 00:23:31.510 "completed_nvme_io": 0, 00:23:31.510 "transports": [ 00:23:31.510 { 00:23:31.510 "trtype": "TCP" 00:23:31.510 } 00:23:31.510 ] 00:23:31.510 } 00:23:31.510 ] 00:23:31.510 }' 00:23:31.510 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:31.510 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:23:31.510 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:23:31.510 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:23:31.510 19:19:37 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1496996 00:23:39.685 Initializing NVMe Controllers 00:23:39.685 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:39.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:39.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:39.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:39.685 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:39.685 Initialization complete. Launching workers. 00:23:39.685 ======================================================== 00:23:39.685 Latency(us) 00:23:39.685 Device Information : IOPS MiB/s Average min max 00:23:39.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4858.40 18.98 13215.10 1535.86 60412.54 00:23:39.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 8552.70 33.41 7483.03 1100.64 56350.42 00:23:39.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7006.30 27.37 9134.00 1168.29 59268.87 00:23:39.685 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 19567.20 76.43 3276.77 1193.36 45251.02 00:23:39.685 ======================================================== 00:23:39.685 Total : 39984.60 156.19 6410.40 1100.64 60412.54 00:23:39.685 00:23:39.685 19:19:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:39.686 rmmod nvme_tcp 00:23:39.686 rmmod nvme_fabrics 00:23:39.686 rmmod nvme_keyring 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1496863 ']' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 1496863 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1496863 ']' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1496863 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1496863 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1496863' 00:23:39.686 killing process with pid 1496863 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1496863 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1496863 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:39.686 19:19:45 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.986 19:19:48 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:42.986 19:19:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:23:42.986 00:23:42.986 real 0m53.067s 00:23:42.986 user 2m47.689s 00:23:42.986 sys 0m11.514s 00:23:42.986 19:19:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:42.986 19:19:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:42.986 ************************************ 00:23:42.986 END TEST nvmf_perf_adq 00:23:42.986 ************************************ 00:23:42.986 19:19:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:42.986 19:19:48 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:42.986 19:19:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:42.986 19:19:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.986 19:19:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:42.986 ************************************ 00:23:42.986 START TEST nvmf_shutdown 00:23:42.986 ************************************ 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:42.986 * Looking for test storage... 
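For reference before the shutdown suite takes over: the pass/fail decision that closed the perf_adq run above came down to a single RPC query. nvmf_get_stats reported that poll groups 002 and 003 carried no I/O queue pairs while 000 and 001 carried all of them, i.e. ADQ steered the connections onto a subset of cores instead of spreading them round-robin. A minimal sketch of the same check, assuming rpc.py is invoked from the SPDK repository root against the default /var/tmp/spdk.sock (the test drives it through its own rpc_cmd helper):

  # count poll groups that received no I/O qpairs; the ADQ run passes
  # when at least two of the four groups stayed idle
  count=$(scripts/rpc.py nvmf_get_stats \
            | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' \
            | wc -l)
  if [[ $count -lt 2 ]]; then
      echo "ADQ steering check failed: only $count idle poll groups"
      exit 1
  fi

In this run the count came back as 2, so the [[ 2 -lt 2 ]] guard above did not trip and the test ran through the perf results and teardown.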
00:23:42.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:42.986 ************************************ 00:23:42.986 START TEST nvmf_shutdown_tc1 00:23:42.986 ************************************ 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:42.986 19:19:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:42.986 19:19:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:49.574 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:49.575 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:49.575 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:49.575 19:19:55 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:49.575 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:49.575 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:49.575 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:49.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:49.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.444 ms 00:23:49.838 00:23:49.838 --- 10.0.0.2 ping statistics --- 00:23:49.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.838 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:49.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:49.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:23:49.838 00:23:49.838 --- 10.0.0.1 ping statistics --- 00:23:49.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:49.838 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1503457 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1503457 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1503457 ']' 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.838 19:19:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:49.838 [2024-07-12 19:19:55.863705] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
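The nvmftestinit sequence just traced repeats the two-namespace layout used for the perf_adq run: one ice port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target at 10.0.0.2, the other (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, and reachability is verified in both directions before the target application starts. A compact sketch of that layout, using the device names from this run (on other rigs the cvl_* names will differ):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

Because the target port lives inside the namespace, nvmf_tgt itself is launched under the ip netns exec cvl_0_0_ns_spdk prefix, which is why the nvmfappstart command above carries that prefix.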
00:23:49.838 [2024-07-12 19:19:55.863756] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:49.838 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.838 [2024-07-12 19:19:55.946239] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:50.099 [2024-07-12 19:19:56.006389] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.099 [2024-07-12 19:19:56.006426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.099 [2024-07-12 19:19:56.006432] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.099 [2024-07-12 19:19:56.006436] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.100 [2024-07-12 19:19:56.006440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:50.100 [2024-07-12 19:19:56.006557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:50.100 [2024-07-12 19:19:56.006718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:50.100 [2024-07-12 19:19:56.006873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:50.100 [2024-07-12 19:19:56.006876] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 [2024-07-12 19:19:56.686369] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:50.673 19:19:56 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:50.673 19:19:56 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.673 Malloc1 00:23:50.673 [2024-07-12 19:19:56.785914] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:50.933 Malloc2 00:23:50.934 Malloc3 00:23:50.934 Malloc4 00:23:50.934 Malloc5 00:23:50.934 Malloc6 00:23:50.934 Malloc7 00:23:50.934 Malloc8 00:23:51.195 Malloc9 00:23:51.195 Malloc10 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1503841 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1503841 
/var/tmp/bdevperf.sock 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1503841 ']' 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:51.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.195 { 00:23:51.195 "params": { 00:23:51.195 "name": "Nvme$subsystem", 00:23:51.195 "trtype": "$TEST_TRANSPORT", 00:23:51.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.195 "adrfam": "ipv4", 00:23:51.195 "trsvcid": "$NVMF_PORT", 00:23:51.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.195 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.195 "hdgst": ${hdgst:-false}, 00:23:51.195 "ddgst": ${ddgst:-false} 00:23:51.195 }, 00:23:51.195 "method": "bdev_nvme_attach_controller" 00:23:51.195 } 00:23:51.195 EOF 00:23:51.195 )") 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.195 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.195 { 00:23:51.195 "params": { 00:23:51.195 "name": "Nvme$subsystem", 00:23:51.195 "trtype": "$TEST_TRANSPORT", 00:23:51.195 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.195 "adrfam": "ipv4", 00:23:51.195 "trsvcid": "$NVMF_PORT", 00:23:51.195 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 
"name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 [2024-07-12 19:19:57.230350] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:23:51.196 [2024-07-12 19:19:57.230404] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:51.196 { 00:23:51.196 "params": { 00:23:51.196 "name": "Nvme$subsystem", 00:23:51.196 "trtype": "$TEST_TRANSPORT", 00:23:51.196 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.196 "adrfam": "ipv4", 00:23:51.196 "trsvcid": "$NVMF_PORT", 00:23:51.196 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.196 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.196 "hdgst": ${hdgst:-false}, 
00:23:51.196 "ddgst": ${ddgst:-false} 00:23:51.196 }, 00:23:51.196 "method": "bdev_nvme_attach_controller" 00:23:51.196 } 00:23:51.196 EOF 00:23:51.196 )") 00:23:51.196 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.196 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:51.197 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:51.197 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:51.197 19:19:57 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme1", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme2", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme3", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme4", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme5", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme6", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme7", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 
00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme8", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme9", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 },{ 00:23:51.197 "params": { 00:23:51.197 "name": "Nvme10", 00:23:51.197 "trtype": "tcp", 00:23:51.197 "traddr": "10.0.0.2", 00:23:51.197 "adrfam": "ipv4", 00:23:51.197 "trsvcid": "4420", 00:23:51.197 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:51.197 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:51.197 "hdgst": false, 00:23:51.197 "ddgst": false 00:23:51.197 }, 00:23:51.197 "method": "bdev_nvme_attach_controller" 00:23:51.197 }' 00:23:51.197 [2024-07-12 19:19:57.290387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.458 [2024-07-12 19:19:57.355236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1503841 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:52.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1503841 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:52.846 19:19:58 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1503457 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
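Taken together, the trace above is the core of shutdown_tc1: bdev_svc attaches all ten controllers, is killed hard with SIGKILL, and the nvmf target must survive so that bdevperf can immediately reattach. A condensed sketch of that sequence, reconstructed from the trace (waitforlisten, rpc_cmd and gen_nvmf_target_json are harness helpers; $nvmfpid stands in for the traced target pid 1503457):

# Condensed reconstruction of the tc1 flow seen in the trace above.
./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) &
svcpid=$!
waitforlisten "$svcpid" /var/tmp/bdevperf.sock          # RPC socket is up
rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init   # all controllers attached

kill -9 "$svcpid"           # abrupt initiator shutdown (the traced pid 1503841)
rm -f /var/run/spdk_bdev1
sleep 1
kill -0 "$nvmfpid"          # the nvmf target must still be running after the kill

# Reattach with bdevperf and run a short verify workload against the same target.
./build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1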
00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.789 { 00:23:53.789 "params": { 00:23:53.789 "name": "Nvme$subsystem", 00:23:53.789 "trtype": "$TEST_TRANSPORT", 00:23:53.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.789 "adrfam": "ipv4", 00:23:53.789 "trsvcid": "$NVMF_PORT", 00:23:53.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.789 "hdgst": ${hdgst:-false}, 00:23:53.789 "ddgst": ${ddgst:-false} 00:23:53.789 }, 00:23:53.789 "method": "bdev_nvme_attach_controller" 00:23:53.789 } 00:23:53.789 EOF 00:23:53.789 )") 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.789 { 00:23:53.789 "params": { 00:23:53.789 "name": "Nvme$subsystem", 00:23:53.789 "trtype": "$TEST_TRANSPORT", 00:23:53.789 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.789 "adrfam": "ipv4", 00:23:53.789 "trsvcid": "$NVMF_PORT", 00:23:53.789 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.789 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.789 "hdgst": ${hdgst:-false}, 00:23:53.789 "ddgst": ${ddgst:-false} 00:23:53.789 }, 00:23:53.789 "method": "bdev_nvme_attach_controller" 00:23:53.789 } 00:23:53.789 EOF 00:23:53.789 )") 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.789 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 19:19:59 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 [2024-07-12 19:19:59.640843] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:23:53.790 [2024-07-12 19:19:59.640895] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1504226 ] 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 "hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:53.790 { 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme$subsystem", 00:23:53.790 "trtype": "$TEST_TRANSPORT", 00:23:53.790 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "$NVMF_PORT", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:53.790 
"hdgst": ${hdgst:-false}, 00:23:53.790 "ddgst": ${ddgst:-false} 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 } 00:23:53.790 EOF 00:23:53.790 )") 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:53.790 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:53.790 19:19:59 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme1", 00:23:53.790 "trtype": "tcp", 00:23:53.790 "traddr": "10.0.0.2", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "4420", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:53.790 "hdgst": false, 00:23:53.790 "ddgst": false 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 },{ 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme2", 00:23:53.790 "trtype": "tcp", 00:23:53.790 "traddr": "10.0.0.2", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "4420", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:53.790 "hdgst": false, 00:23:53.790 "ddgst": false 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 },{ 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme3", 00:23:53.790 "trtype": "tcp", 00:23:53.790 "traddr": "10.0.0.2", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "4420", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:53.790 "hdgst": false, 00:23:53.790 "ddgst": false 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 },{ 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme4", 00:23:53.790 "trtype": "tcp", 00:23:53.790 "traddr": "10.0.0.2", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "4420", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:53.790 "hdgst": false, 00:23:53.790 "ddgst": false 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 },{ 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme5", 00:23:53.790 "trtype": "tcp", 00:23:53.790 "traddr": "10.0.0.2", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "4420", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:53.790 "hdgst": false, 00:23:53.790 "ddgst": false 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.790 },{ 00:23:53.790 "params": { 00:23:53.790 "name": "Nvme6", 00:23:53.790 "trtype": "tcp", 00:23:53.790 "traddr": "10.0.0.2", 00:23:53.790 "adrfam": "ipv4", 00:23:53.790 "trsvcid": "4420", 00:23:53.790 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:53.790 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:53.790 "hdgst": false, 00:23:53.790 "ddgst": false 00:23:53.790 }, 00:23:53.790 "method": "bdev_nvme_attach_controller" 00:23:53.791 },{ 00:23:53.791 "params": { 00:23:53.791 "name": "Nvme7", 00:23:53.791 "trtype": "tcp", 00:23:53.791 "traddr": "10.0.0.2", 00:23:53.791 "adrfam": "ipv4", 00:23:53.791 "trsvcid": "4420", 00:23:53.791 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:53.791 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:53.791 "hdgst": false, 
00:23:53.791 "ddgst": false 00:23:53.791 }, 00:23:53.791 "method": "bdev_nvme_attach_controller" 00:23:53.791 },{ 00:23:53.791 "params": { 00:23:53.791 "name": "Nvme8", 00:23:53.791 "trtype": "tcp", 00:23:53.791 "traddr": "10.0.0.2", 00:23:53.791 "adrfam": "ipv4", 00:23:53.791 "trsvcid": "4420", 00:23:53.791 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:53.791 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:53.791 "hdgst": false, 00:23:53.791 "ddgst": false 00:23:53.791 }, 00:23:53.791 "method": "bdev_nvme_attach_controller" 00:23:53.791 },{ 00:23:53.791 "params": { 00:23:53.791 "name": "Nvme9", 00:23:53.791 "trtype": "tcp", 00:23:53.791 "traddr": "10.0.0.2", 00:23:53.791 "adrfam": "ipv4", 00:23:53.791 "trsvcid": "4420", 00:23:53.791 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:53.791 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:53.791 "hdgst": false, 00:23:53.791 "ddgst": false 00:23:53.791 }, 00:23:53.791 "method": "bdev_nvme_attach_controller" 00:23:53.791 },{ 00:23:53.791 "params": { 00:23:53.791 "name": "Nvme10", 00:23:53.791 "trtype": "tcp", 00:23:53.791 "traddr": "10.0.0.2", 00:23:53.791 "adrfam": "ipv4", 00:23:53.791 "trsvcid": "4420", 00:23:53.791 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:53.791 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:53.791 "hdgst": false, 00:23:53.791 "ddgst": false 00:23:53.791 }, 00:23:53.791 "method": "bdev_nvme_attach_controller" 00:23:53.791 }' 00:23:53.791 [2024-07-12 19:19:59.701662] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.791 [2024-07-12 19:19:59.766400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.181 Running I/O for 1 seconds... 00:23:56.565 00:23:56.565 Latency(us) 00:23:56.565 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.565 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme1n1 : 1.10 232.37 14.52 0.00 0.00 272356.48 21299.20 253405.87 00:23:56.565 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme2n1 : 1.15 223.45 13.97 0.00 0.00 278650.24 24029.87 276125.01 00:23:56.565 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme3n1 : 1.09 239.04 14.94 0.00 0.00 249665.38 20425.39 225443.84 00:23:56.565 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme4n1 : 1.13 226.04 14.13 0.00 0.00 264819.63 20425.39 256901.12 00:23:56.565 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme5n1 : 1.16 276.83 17.30 0.00 0.00 213334.53 15182.51 248162.99 00:23:56.565 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme6n1 : 1.16 220.63 13.79 0.00 0.00 262618.67 22828.37 276125.01 00:23:56.565 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme7n1 : 1.14 224.30 14.02 0.00 0.00 253430.19 21080.75 249910.61 00:23:56.565 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme8n1 : 1.15 
222.44 13.90 0.00 0.00 251035.09 15728.64 255153.49 00:23:56.565 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme9n1 : 1.19 269.82 16.86 0.00 0.00 203933.18 13325.65 219327.15 00:23:56.565 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:56.565 Verification LBA range: start 0x0 length 0x400 00:23:56.565 Nvme10n1 : 1.21 263.40 16.46 0.00 0.00 205806.51 12779.52 281367.89 00:23:56.565 =================================================================================================================== 00:23:56.565 Total : 2398.31 149.89 0.00 0.00 242934.87 12779.52 281367.89 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:56.565 rmmod nvme_tcp 00:23:56.565 rmmod nvme_fabrics 00:23:56.565 rmmod nvme_keyring 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1503457 ']' 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1503457 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1503457 ']' 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1503457 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1503457 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:56.565 19:20:02 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1503457' 00:23:56.565 killing process with pid 1503457 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1503457 00:23:56.565 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1503457 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.826 19:20:02 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.741 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:58.741 00:23:58.741 real 0m16.071s 00:23:58.741 user 0m33.122s 00:23:58.741 sys 0m6.150s 00:23:58.741 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:58.741 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:58.741 ************************************ 00:23:58.741 END TEST nvmf_shutdown_tc1 00:23:58.741 ************************************ 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:59.003 ************************************ 00:23:59.003 START TEST nvmf_shutdown_tc2 00:23:59.003 ************************************ 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:59.003 19:20:04 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:23:59.003 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:23:59.003 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:23:59.003 Found net devices under 0000:4b:00.0: cvl_0_0 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:23:59.003 Found net devices under 0000:4b:00.1: cvl_0_1 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:59.003 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:59.004 19:20:04 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:59.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:59.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.487 ms 00:23:59.265 00:23:59.265 --- 10.0.0.2 ping statistics --- 00:23:59.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.265 rtt min/avg/max/mdev = 0.487/0.487/0.487/0.000 ms 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:59.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.229 ms 00:23:59.265 00:23:59.265 --- 10.0.0.1 ping statistics --- 00:23:59.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.265 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=1505535 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1505535 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1505535 ']' 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:59.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.265 19:20:05 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.265 [2024-07-12 19:20:05.394094] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:23:59.265 [2024-07-12 19:20:05.394211] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:59.527 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.527 [2024-07-12 19:20:05.487247] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:59.527 [2024-07-12 19:20:05.548714] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:59.527 [2024-07-12 19:20:05.548746] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:59.527 [2024-07-12 19:20:05.548751] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:59.527 [2024-07-12 19:20:05.548756] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:59.527 [2024-07-12 19:20:05.548760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
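The nvmfappstart traced above runs the target inside the cvl_0_0_ns_spdk namespace with -e 0xFFFF (all tracepoint groups, hence the spdk_trace notice) and core mask 0x1E, i.e. cores 1-4, which matches the four reactor notices that follow. A minimal sketch of that bring-up (NVMF_TARGET_NS_CMD and waitforlisten are harness helpers reconstructed from the trace, not the exact nvmf/common.sh code):

# Sketch of the traced target launch inside the test network namespace.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
"${NVMF_TARGET_NS_CMD[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
waitforlisten "$nvmfpid"    # default RPC socket /var/tmp/spdk.sock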
00:23:59.527 [2024-07-12 19:20:05.548877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:59.527 [2024-07-12 19:20:05.549043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.527 [2024-07-12 19:20:05.549187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:59.527 [2024-07-12 19:20:05.549189] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.099 [2024-07-12 19:20:06.218241] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:00.099 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.360 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.361 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.361 Malloc1 00:24:00.361 [2024-07-12 19:20:06.316874] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:00.361 Malloc2 00:24:00.361 Malloc3 00:24:00.361 Malloc4 00:24:00.361 Malloc5 00:24:00.361 Malloc6 00:24:00.622 Malloc7 00:24:00.622 Malloc8 00:24:00.622 Malloc9 00:24:00.622 Malloc10 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1505742 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1505742 /var/tmp/bdevperf.sock 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1505742 ']' 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:00.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
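The Malloc1 through Malloc10 lines above are the create_subsystems step provisioning the target over its RPC socket: a malloc bdev, an NVMe-oF subsystem, a namespace and a TCP listener on 10.0.0.2:4420 for each of the ten cnodes. A sketch of the per-subsystem batch using standard SPDK RPC names (the malloc size/block size and serial numbers are assumptions; shutdown.sh may use different values):

# One batch entry per subsystem id; replayed over /var/tmp/spdk.sock by rpc_cmd.
for i in {1..10}; do
    echo "bdev_malloc_create -b Malloc$i 64 512"    # size/block values are assumed
    echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
done > rpcs.txt
rpc_cmd < rpcs.txt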
00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.622 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.622 { 00:24:00.622 "params": { 00:24:00.622 "name": "Nvme$subsystem", 00:24:00.622 "trtype": "$TEST_TRANSPORT", 00:24:00.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.622 "adrfam": "ipv4", 00:24:00.622 "trsvcid": "$NVMF_PORT", 00:24:00.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.622 "hdgst": ${hdgst:-false}, 00:24:00.622 "ddgst": ${ddgst:-false} 00:24:00.622 }, 00:24:00.622 "method": "bdev_nvme_attach_controller" 00:24:00.622 } 00:24:00.622 EOF 00:24:00.622 )") 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.623 { 00:24:00.623 "params": { 00:24:00.623 "name": "Nvme$subsystem", 00:24:00.623 "trtype": "$TEST_TRANSPORT", 00:24:00.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.623 "adrfam": "ipv4", 00:24:00.623 "trsvcid": "$NVMF_PORT", 00:24:00.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.623 "hdgst": ${hdgst:-false}, 00:24:00.623 "ddgst": ${ddgst:-false} 00:24:00.623 }, 00:24:00.623 "method": "bdev_nvme_attach_controller" 00:24:00.623 } 00:24:00.623 EOF 00:24:00.623 )") 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.623 { 00:24:00.623 "params": { 00:24:00.623 "name": "Nvme$subsystem", 00:24:00.623 "trtype": "$TEST_TRANSPORT", 00:24:00.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.623 "adrfam": "ipv4", 00:24:00.623 "trsvcid": "$NVMF_PORT", 00:24:00.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.623 "hdgst": ${hdgst:-false}, 00:24:00.623 "ddgst": ${ddgst:-false} 00:24:00.623 }, 00:24:00.623 "method": "bdev_nvme_attach_controller" 00:24:00.623 } 00:24:00.623 EOF 00:24:00.623 )") 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.623 { 00:24:00.623 "params": { 00:24:00.623 "name": "Nvme$subsystem", 00:24:00.623 "trtype": "$TEST_TRANSPORT", 00:24:00.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.623 "adrfam": "ipv4", 00:24:00.623 "trsvcid": "$NVMF_PORT", 00:24:00.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.623 "hdgst": ${hdgst:-false}, 00:24:00.623 "ddgst": ${ddgst:-false} 00:24:00.623 }, 00:24:00.623 "method": "bdev_nvme_attach_controller" 00:24:00.623 } 00:24:00.623 EOF 00:24:00.623 )") 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.623 { 00:24:00.623 "params": { 00:24:00.623 "name": "Nvme$subsystem", 00:24:00.623 "trtype": "$TEST_TRANSPORT", 00:24:00.623 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.623 "adrfam": "ipv4", 00:24:00.623 "trsvcid": "$NVMF_PORT", 00:24:00.623 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.623 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.623 "hdgst": ${hdgst:-false}, 00:24:00.623 "ddgst": ${ddgst:-false} 00:24:00.623 }, 00:24:00.623 "method": "bdev_nvme_attach_controller" 00:24:00.623 } 00:24:00.623 EOF 00:24:00.623 )") 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.623 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.884 { 00:24:00.884 "params": { 00:24:00.884 "name": "Nvme$subsystem", 00:24:00.884 "trtype": "$TEST_TRANSPORT", 00:24:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.884 "adrfam": "ipv4", 00:24:00.884 "trsvcid": "$NVMF_PORT", 00:24:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.884 "hdgst": ${hdgst:-false}, 00:24:00.884 "ddgst": ${ddgst:-false} 00:24:00.884 }, 00:24:00.884 "method": "bdev_nvme_attach_controller" 00:24:00.884 } 00:24:00.884 EOF 00:24:00.884 )") 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.884 [2024-07-12 19:20:06.758046] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:24:00.884 [2024-07-12 19:20:06.758099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505742 ] 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.884 { 00:24:00.884 "params": { 00:24:00.884 "name": "Nvme$subsystem", 00:24:00.884 "trtype": "$TEST_TRANSPORT", 00:24:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.884 "adrfam": "ipv4", 00:24:00.884 "trsvcid": "$NVMF_PORT", 00:24:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.884 "hdgst": ${hdgst:-false}, 00:24:00.884 "ddgst": ${ddgst:-false} 00:24:00.884 }, 00:24:00.884 "method": "bdev_nvme_attach_controller" 00:24:00.884 } 00:24:00.884 EOF 00:24:00.884 )") 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.884 { 00:24:00.884 "params": { 00:24:00.884 "name": "Nvme$subsystem", 00:24:00.884 "trtype": "$TEST_TRANSPORT", 00:24:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.884 "adrfam": "ipv4", 00:24:00.884 "trsvcid": "$NVMF_PORT", 00:24:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.884 "hdgst": ${hdgst:-false}, 00:24:00.884 "ddgst": ${ddgst:-false} 00:24:00.884 }, 00:24:00.884 "method": "bdev_nvme_attach_controller" 00:24:00.884 } 00:24:00.884 EOF 00:24:00.884 )") 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.884 { 00:24:00.884 "params": { 00:24:00.884 "name": "Nvme$subsystem", 00:24:00.884 "trtype": "$TEST_TRANSPORT", 00:24:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.884 "adrfam": "ipv4", 00:24:00.884 "trsvcid": "$NVMF_PORT", 00:24:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.884 "hdgst": ${hdgst:-false}, 00:24:00.884 "ddgst": ${ddgst:-false} 00:24:00.884 }, 00:24:00.884 "method": "bdev_nvme_attach_controller" 00:24:00.884 } 00:24:00.884 EOF 00:24:00.884 )") 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:00.884 { 00:24:00.884 "params": { 00:24:00.884 "name": "Nvme$subsystem", 00:24:00.884 "trtype": "$TEST_TRANSPORT", 00:24:00.884 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:00.884 "adrfam": "ipv4", 00:24:00.884 "trsvcid": "$NVMF_PORT", 00:24:00.884 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:00.884 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:00.884 
"hdgst": ${hdgst:-false}, 00:24:00.884 "ddgst": ${ddgst:-false} 00:24:00.884 }, 00:24:00.884 "method": "bdev_nvme_attach_controller" 00:24:00.884 } 00:24:00.884 EOF 00:24:00.884 )") 00:24:00.884 EAL: No free 2048 kB hugepages reported on node 1 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:24:00.884 19:20:06 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:00.884 "params": { 00:24:00.884 "name": "Nvme1", 00:24:00.884 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme2", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme3", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme4", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme5", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme6", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme7", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:00.885 "hdgst": false, 
00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme8", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme9", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 },{ 00:24:00.885 "params": { 00:24:00.885 "name": "Nvme10", 00:24:00.885 "trtype": "tcp", 00:24:00.885 "traddr": "10.0.0.2", 00:24:00.885 "adrfam": "ipv4", 00:24:00.885 "trsvcid": "4420", 00:24:00.885 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:00.885 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:00.885 "hdgst": false, 00:24:00.885 "ddgst": false 00:24:00.885 }, 00:24:00.885 "method": "bdev_nvme_attach_controller" 00:24:00.885 }' 00:24:00.885 [2024-07-12 19:20:06.817509] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:00.885 [2024-07-12 19:20:06.882081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:02.268 Running I/O for 10 seconds... 00:24:02.268 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:02.268 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:24:02.268 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:02.268 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.268 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:02.528 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:02.529 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:24:02.790 19:20:08 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:03.051 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1505742 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1505742 ']' 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1505742 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:03.052 19:20:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505742 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505742' 00:24:03.052 killing process with pid 1505742 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1505742 00:24:03.052 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1505742 00:24:03.312 Received shutdown signal, test time was about 0.965915 seconds 00:24:03.312 00:24:03.312 Latency(us) 00:24:03.312 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:03.312 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme1n1 : 0.97 263.21 16.45 0.00 0.00 239950.35 20643.84 244667.73 00:24:03.312 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme2n1 : 0.94 204.92 12.81 0.00 0.00 301998.93 22282.24 251658.24 00:24:03.312 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme3n1 : 0.96 267.77 16.74 0.00 0.00 226731.52 19879.25 244667.73 00:24:03.312 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme4n1 : 0.95 270.10 16.88 0.00 0.00 219920.21 20862.29 249910.61 00:24:03.312 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme5n1 : 0.95 269.03 16.81 0.00 0.00 216182.19 18677.76 242920.11 00:24:03.312 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme6n1 : 0.96 266.02 16.63 0.00 0.00 214130.77 21189.97 241172.48 00:24:03.312 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme7n1 : 0.94 203.55 12.72 0.00 0.00 272582.26 22391.47 244667.73 00:24:03.312 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme8n1 : 0.93 274.10 17.13 0.00 0.00 197681.92 13107.20 244667.73 00:24:03.312 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme9n1 : 0.92 208.16 13.01 0.00 0.00 253607.54 18240.85 241172.48 00:24:03.312 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:03.312 Verification LBA range: start 0x0 length 0x400 00:24:03.312 Nvme10n1 : 0.94 203.79 12.74 0.00 0.00 253600.14 22173.01 265639.25 00:24:03.312 =================================================================================================================== 00:24:03.312 Total : 2430.63 151.91 0.00 0.00 236212.16 
13107.20 265639.25 00:24:03.312 19:20:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1505535 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:04.254 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:04.513 rmmod nvme_tcp 00:24:04.513 rmmod nvme_fabrics 00:24:04.513 rmmod nvme_keyring 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1505535 ']' 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1505535 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1505535 ']' 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1505535 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1505535 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1505535' 00:24:04.513 killing process with pid 1505535 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1505535 00:24:04.513 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1505535 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
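A quick sanity check on the bdevperf summary above: with -o 65536 each I/O is 64 KiB, so MiB/s is simply IOPS/16 -- Nvme1n1 at 263.21 IOPS is 263.21/16 ≈ 16.45 MiB/s, and the 2430.63 IOPS total gives ≈ 151.91 MiB/s, matching the printed columns. Over the ~0.97 s measured runtime that is roughly 2.3k completed reads spread across the ten controllers, consistent with the waitforio loop seeing 3, then 67, then 131 reads on Nvme1n1 before crossing its 100-read threshold. A one-liner to reproduce the conversion (illustrative):

    # Convert IOPS to MiB/s at a 64 KiB I/O size.
    awk 'BEGIN { printf "%.2f MiB/s\n", 263.21 * 65536 / 1048576 }'   # -> 16.45 MiB/s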
00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.774 19:20:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.706 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:06.706 00:24:06.706 real 0m7.855s 00:24:06.706 user 0m23.518s 00:24:06.706 sys 0m1.222s 00:24:06.706 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:06.706 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:06.706 ************************************ 00:24:06.706 END TEST nvmf_shutdown_tc2 00:24:06.706 ************************************ 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:07.002 ************************************ 00:24:07.002 START TEST nvmf_shutdown_tc3 00:24:07.002 ************************************ 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
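Between tc2 and tc3 the whole fixture is torn down and rebuilt, which is why the PCI scan and namespace setup repeat below. Condensed from the rmmod/flush lines just traced, the tc2 teardown amounts to roughly the following (a sketch of the nvmftestfini/nvmf_tcp_fini effect, not the verbatim helper bodies; the namespace name cvl_0_0_ns_spdk is the one used throughout this run):

    # Illustrative teardown mirroring the trace above.
    kill "$nvmfpid"                           # stop nvmf_tgt (pid 1505535 here)
    modprobe -v -r nvme-tcp nvme-fabrics      # host modules; nvme_keyring is removed as a dependency
    ip netns delete cvl_0_0_ns_spdk           # drop the target-side namespace
    ip -4 addr flush cvl_0_1                  # clear the initiator-side address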
00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:07.002 19:20:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:07.002 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:07.002 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:07.002 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:07.002 19:20:12 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:07.002 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:07.002 19:20:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:07.002 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:07.002 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:07.002 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:07.002 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:07.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:07.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.456 ms 00:24:07.263 00:24:07.263 --- 10.0.0.2 ping statistics --- 00:24:07.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.263 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:07.263 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.263 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:24:07.263 00:24:07.263 --- 10.0.0.1 ping statistics --- 00:24:07.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.263 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1507163 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1507163 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:07.263 19:20:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1507163 ']' 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.263 19:20:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.263 [2024-07-12 19:20:13.311320] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:24:07.263 [2024-07-12 19:20:13.311389] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.263 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.524 [2024-07-12 19:20:13.402459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:07.524 [2024-07-12 19:20:13.463171] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.524 [2024-07-12 19:20:13.463205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:07.524 [2024-07-12 19:20:13.463210] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.524 [2024-07-12 19:20:13.463215] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.524 [2024-07-12 19:20:13.463219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
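The core mask passed to nvmf_tgt above, -m 0x1E, is 0b11110: bit 0 clear, bits 1 through 4 set, so the target claims cores 1-4 and leaves core 0 free for the bdevperf initiator -- which matches the "Total cores available: 4" report and the four reactor start-up notices that follow. A small way to decode such a mask (illustrative helper, not part of the test scripts):

    # List the CPU cores selected by an SPDK core mask.
    mask=0x1E
    for cpu in $(seq 0 31); do
      if (( (mask >> cpu) & 1 )); then echo "core $cpu"; fi
    done   # -> core 1 .. core 4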
00:24:07.524 [2024-07-12 19:20:13.463353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:07.524 [2024-07-12 19:20:13.463520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:07.524 [2024-07-12 19:20:13.463636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.524 [2024-07-12 19:20:13.463639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.096 [2024-07-12 19:20:14.139743] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:08.096 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.096 Malloc1 00:24:08.357 [2024-07-12 19:20:14.238340] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.357 Malloc2 00:24:08.357 Malloc3 00:24:08.357 Malloc4 00:24:08.357 Malloc5 00:24:08.357 Malloc6 00:24:08.357 Malloc7 00:24:08.619 Malloc8 00:24:08.619 Malloc9 00:24:08.619 Malloc10 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1507542 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1507542 /var/tmp/bdevperf.sock 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1507542 ']' 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
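The bdevperf launch that follows feeds gen_nvmf_target_json output to --json /dev/fd/63, exactly as in tc2; the per-controller blocks it expands are the ones printf'ed earlier in this trace. Condensed, and assuming the usual top-level "subsystems"/"bdev"/"config" wrapper that gen_nvmf_target_json adds around those blocks (the wrapper itself is not shown in the log), the config bdevperf receives looks like this, with eight more cnode entries omitted:

    {
      "subsystems": [ { "subsystem": "bdev", "config": [
        { "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode1",
                      "hostnqn": "nqn.2016-06.io.spdk:host1",
                      "hdgst": false, "ddgst": false },
          "method": "bdev_nvme_attach_controller" },
        { "params": { "name": "Nvme2", "trtype": "tcp", "traddr": "10.0.0.2",
                      "adrfam": "ipv4", "trsvcid": "4420",
                      "subnqn": "nqn.2016-06.io.spdk:cnode2",
                      "hostnqn": "nqn.2016-06.io.spdk:host2",
                      "hdgst": false, "ddgst": false },
          "method": "bdev_nvme_attach_controller" }
      ] } ]
    }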
00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.619 { 00:24:08.619 "params": { 00:24:08.619 "name": "Nvme$subsystem", 00:24:08.619 "trtype": "$TEST_TRANSPORT", 00:24:08.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.619 "adrfam": "ipv4", 00:24:08.619 "trsvcid": "$NVMF_PORT", 00:24:08.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.619 "hdgst": ${hdgst:-false}, 00:24:08.619 "ddgst": ${ddgst:-false} 00:24:08.619 }, 00:24:08.619 "method": "bdev_nvme_attach_controller" 00:24:08.619 } 00:24:08.619 EOF 00:24:08.619 )") 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.619 { 00:24:08.619 "params": { 00:24:08.619 "name": "Nvme$subsystem", 00:24:08.619 "trtype": "$TEST_TRANSPORT", 00:24:08.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.619 "adrfam": "ipv4", 00:24:08.619 "trsvcid": "$NVMF_PORT", 00:24:08.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.619 "hdgst": ${hdgst:-false}, 00:24:08.619 "ddgst": ${ddgst:-false} 00:24:08.619 }, 00:24:08.619 "method": "bdev_nvme_attach_controller" 00:24:08.619 } 00:24:08.619 EOF 00:24:08.619 )") 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.619 { 00:24:08.619 "params": { 00:24:08.619 "name": "Nvme$subsystem", 00:24:08.619 "trtype": "$TEST_TRANSPORT", 00:24:08.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.619 "adrfam": "ipv4", 00:24:08.619 "trsvcid": "$NVMF_PORT", 00:24:08.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.619 "hdgst": ${hdgst:-false}, 00:24:08.619 "ddgst": ${ddgst:-false} 00:24:08.619 }, 00:24:08.619 "method": "bdev_nvme_attach_controller" 00:24:08.619 } 00:24:08.619 EOF 00:24:08.619 )") 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.619 { 00:24:08.619 "params": { 00:24:08.619 "name": "Nvme$subsystem", 00:24:08.619 "trtype": "$TEST_TRANSPORT", 00:24:08.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.619 "adrfam": "ipv4", 00:24:08.619 "trsvcid": "$NVMF_PORT", 00:24:08.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.619 "hdgst": ${hdgst:-false}, 00:24:08.619 "ddgst": ${ddgst:-false} 00:24:08.619 }, 00:24:08.619 "method": "bdev_nvme_attach_controller" 00:24:08.619 } 00:24:08.619 EOF 00:24:08.619 )") 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.619 { 00:24:08.619 "params": { 00:24:08.619 "name": "Nvme$subsystem", 00:24:08.619 "trtype": "$TEST_TRANSPORT", 00:24:08.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.619 "adrfam": "ipv4", 00:24:08.619 "trsvcid": "$NVMF_PORT", 00:24:08.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.619 "hdgst": ${hdgst:-false}, 00:24:08.619 "ddgst": ${ddgst:-false} 00:24:08.619 }, 00:24:08.619 "method": "bdev_nvme_attach_controller" 00:24:08.619 } 00:24:08.619 EOF 00:24:08.619 )") 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.619 { 00:24:08.619 "params": { 00:24:08.619 "name": "Nvme$subsystem", 00:24:08.619 "trtype": "$TEST_TRANSPORT", 00:24:08.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.619 "adrfam": "ipv4", 00:24:08.619 "trsvcid": "$NVMF_PORT", 00:24:08.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.619 "hdgst": ${hdgst:-false}, 00:24:08.619 "ddgst": ${ddgst:-false} 00:24:08.619 }, 00:24:08.619 "method": "bdev_nvme_attach_controller" 00:24:08.619 } 00:24:08.619 EOF 00:24:08.619 )") 00:24:08.619 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.620 [2024-07-12 19:20:14.682400] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
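The nvmf/common.sh@534/@554 fragments interleaved through this part of the trace are gen_nvmf_target_json appending one bdev_nvme_attach_controller entry per subsystem to a bash array from a heredoc template; the @556-@558 steps further down join the entries with IFS=, validate them with jq, and print the result that bdevperf reads through the --json /dev/fd/63 process substitution shown at shutdown.sh@124. A rough, hedged sketch of the same assembly (the function name, the outer "subsystems"/"bdev" wrapper and the hard-coded address and port are illustrative assumptions, not copied from nvmf/common.sh):

# Rough sketch of the JSON assembly, not SPDK's gen_nvmf_target_json itself.
gen_bdevperf_json() {
    local config=() s
    for s in "${@:-1}"; do
        # one attach_controller entry per target subsystem, mirroring the trace
        config+=("$(printf '{"params":{"name":"Nvme%s","trtype":"tcp","traddr":"10.0.0.2","adrfam":"ipv4","trsvcid":"4420","subnqn":"nqn.2016-06.io.spdk:cnode%s","hostnqn":"nqn.2016-06.io.spdk:host%s","hdgst":false,"ddgst":false},"method":"bdev_nvme_attach_controller"}' "$s" "$s" "$s")")
    done
    # join with commas and let jq validate/pretty-print the final document
    local IFS=,
    jq . <<< "{\"subsystems\":[{\"subsystem\":\"bdev\",\"config\":[${config[*]}]}]}"
}

# usage mirroring the traced invocation (flags copied from shutdown.sh@124):
# bdevperf -r /var/tmp/bdevperf.sock --json <(gen_bdevperf_json 1 2 3 4 5 6 7 8 9 10) \
#     -q 64 -o 65536 -w verify -t 10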
00:24:08.620 [2024-07-12 19:20:14.682453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507542 ] 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.620 { 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme$subsystem", 00:24:08.620 "trtype": "$TEST_TRANSPORT", 00:24:08.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "$NVMF_PORT", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.620 "hdgst": ${hdgst:-false}, 00:24:08.620 "ddgst": ${ddgst:-false} 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 } 00:24:08.620 EOF 00:24:08.620 )") 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.620 { 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme$subsystem", 00:24:08.620 "trtype": "$TEST_TRANSPORT", 00:24:08.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "$NVMF_PORT", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.620 "hdgst": ${hdgst:-false}, 00:24:08.620 "ddgst": ${ddgst:-false} 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 } 00:24:08.620 EOF 00:24:08.620 )") 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.620 { 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme$subsystem", 00:24:08.620 "trtype": "$TEST_TRANSPORT", 00:24:08.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "$NVMF_PORT", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.620 "hdgst": ${hdgst:-false}, 00:24:08.620 "ddgst": ${ddgst:-false} 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 } 00:24:08.620 EOF 00:24:08.620 )") 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:08.620 { 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme$subsystem", 00:24:08.620 "trtype": "$TEST_TRANSPORT", 00:24:08.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "$NVMF_PORT", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:08.620 
"hdgst": ${hdgst:-false}, 00:24:08.620 "ddgst": ${ddgst:-false} 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 } 00:24:08.620 EOF 00:24:08.620 )") 00:24:08.620 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:24:08.620 19:20:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme1", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:08.620 "hdgst": false, 00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme2", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:08.620 "hdgst": false, 00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme3", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:08.620 "hdgst": false, 00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme4", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:08.620 "hdgst": false, 00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme5", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:08.620 "hdgst": false, 00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme6", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:08.620 "hdgst": false, 00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme7", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.620 "trsvcid": "4420", 00:24:08.620 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:08.620 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:08.620 "hdgst": false, 
00:24:08.620 "ddgst": false 00:24:08.620 }, 00:24:08.620 "method": "bdev_nvme_attach_controller" 00:24:08.620 },{ 00:24:08.620 "params": { 00:24:08.620 "name": "Nvme8", 00:24:08.620 "trtype": "tcp", 00:24:08.620 "traddr": "10.0.0.2", 00:24:08.620 "adrfam": "ipv4", 00:24:08.621 "trsvcid": "4420", 00:24:08.621 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:08.621 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:08.621 "hdgst": false, 00:24:08.621 "ddgst": false 00:24:08.621 }, 00:24:08.621 "method": "bdev_nvme_attach_controller" 00:24:08.621 },{ 00:24:08.621 "params": { 00:24:08.621 "name": "Nvme9", 00:24:08.621 "trtype": "tcp", 00:24:08.621 "traddr": "10.0.0.2", 00:24:08.621 "adrfam": "ipv4", 00:24:08.621 "trsvcid": "4420", 00:24:08.621 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:08.621 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:08.621 "hdgst": false, 00:24:08.621 "ddgst": false 00:24:08.621 }, 00:24:08.621 "method": "bdev_nvme_attach_controller" 00:24:08.621 },{ 00:24:08.621 "params": { 00:24:08.621 "name": "Nvme10", 00:24:08.621 "trtype": "tcp", 00:24:08.621 "traddr": "10.0.0.2", 00:24:08.621 "adrfam": "ipv4", 00:24:08.621 "trsvcid": "4420", 00:24:08.621 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:08.621 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:08.621 "hdgst": false, 00:24:08.621 "ddgst": false 00:24:08.621 }, 00:24:08.621 "method": "bdev_nvme_attach_controller" 00:24:08.621 }' 00:24:08.621 [2024-07-12 19:20:14.742061] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.881 [2024-07-12 19:20:14.806501] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.265 Running I/O for 10 seconds... 00:24:10.265 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.265 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:10.265 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:10.265 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.265 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:10.526 19:20:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:24:10.526 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=71 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 71 -ge 100 ']' 00:24:10.787 19:20:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=193 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 193 -ge 100 ']' 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1507163 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1507163 ']' 00:24:11.057 19:20:17 
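The shutdown.sh@57-@69 trace above is the waitforio helper polling bdevperf over its RPC socket: up to ten times, a quarter of a second apart, it reads num_read_ops for Nvme1n1 via bdev_get_iostat and jq, and succeeds once at least 100 reads have completed (3, then 71, then 193 in this run). A hedged sketch of that polling loop, assuming rpc.py is used directly in place of the harness rpc_cmd wrapper (socket path, bdev name, retry count and threshold are taken from the trace):

# Sketch of a waitforio-style poll loop; not the harness implementation itself.
waitforio() {
    local rpc_sock=$1 bdev=$2
    local ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        # query the running bdevperf instance for per-bdev I/O statistics
        read_io_count=$(rpc.py -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

# usage matching the traced run:
# waitforio /var/tmp/bdevperf.sock Nvme1n1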
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1507163 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1507163 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1507163' 00:24:11.057 killing process with pid 1507163 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1507163 00:24:11.057 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1507163 00:24:11.057 [2024-07-12 19:20:17.133933] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.133981] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.133987] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.133992] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.133997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134002] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134029] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134038] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the state(5) to be set 00:24:11.057 [2024-07-12 19:20:17.134043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x861ac0 is same with the 
state(5) to be set 00:24:11.057 [the same tcp.c:1607:nvmf_tcp_qpair_set_recv_state *ERROR* line repeats many dozens more times with only the microsecond timestamp changing: for tqpair=0x861ac0 through 19:20:17.134267, then for tqpair=0xa98900 starting at 19:20:17.135219, tqpair=0x861fa0 starting at 19:20:17.136426, tqpair=0x862480 starting at 19:20:17.137917 and tqpair=0x863360 starting at 19:20:17.140149] 00:24:11.060 [2024-07-12 19:20:17.140367] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.060 [2024-07-12 19:20:17.140371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.060 [2024-07-12 19:20:17.140376] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.060 [2024-07-12 19:20:17.140381] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140398] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140402] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140407] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.140444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863360 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141390] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141395] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141408] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141417] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141422] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141431] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141444] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141457] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141466] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141480] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141498] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 
00:24:11.061 [2024-07-12 19:20:17.141507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141516] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141521] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141527] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141531] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141544] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141553] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141558] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141571] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141576] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141585] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141594] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141598] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141603] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is 
same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141616] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141630] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141634] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141639] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141654] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.061 [2024-07-12 19:20:17.141668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.141672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863860 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142357] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.062 [2024-07-12 19:20:17.142363] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
00:24:11.062 [2024-07-12 19:20:17.142407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:11.062 [2024-07-12 19:20:17.142453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.062 [19:20:17.142465 .. 19:20:17.142511] (same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for qid:0 cid:1, cid:2, cid:3)
00:24:11.062 [2024-07-12 19:20:17.142519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25cc740 is same with the state(5) to be set
00:24:11.062 [19:20:17.142560 .. 19:20:17.142622] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.062 [2024-07-12 19:20:17.142631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d21d0 is same with the state(5) to be set
00:24:11.062 [19:20:17.142662 .. 19:20:17.142719] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.062 [2024-07-12 19:20:17.142726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bd040 is same with the state(5) to be set
00:24:11.063 [19:20:17.142748 .. 19:20:17.142803] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.063 [2024-07-12 19:20:17.142810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25dac20 is same with the state(5) to be set
00:24:11.063 [19:20:17.142833 .. 19:20:17.142891] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.063 [2024-07-12 19:20:17.142899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b1b10 is same with the state(5) to be set
00:24:11.063 [19:20:17.142921 .. 19:20:17.142976] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.063 [2024-07-12 19:20:17.142983] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258d5d0 is same with the state(5) to be set
00:24:11.063 [19:20:17.143004 .. 19:20:17.143059] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.063 [2024-07-12 19:20:17.143066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d0a40 is same with the state(5) to be set
00:24:11.063 [19:20:17.143090 .. 19:20:17.143159] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.063 [2024-07-12 19:20:17.143166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2759260 is same with the state(5) to be set
00:24:11.063 [19:20:17.143188 .. 19:20:17.143243] (ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for qid:0 cid:0-3)
00:24:11.063 [2024-07-12 19:20:17.143249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df340 is same with the state(5) to be set
00:24:11.063 [2024-07-12 19:20:17.143795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.063 [2024-07-12 19:20:17.143816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.064 [19:20:17.143831 .. 19:20:17.144889] (same WRITE / ABORTED - SQ DELETION pair repeated for sqid:1 cid:1-63, nsid:1, lba 24704 through 32640 in steps of 128, len:128 each)
00:24:11.065 [2024-07-12 19:20:17.144943] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2b6bf80 was disconnected and freed. reset controller.
00:24:11.065 [2024-07-12 19:20:17.145341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 
[2024-07-12 19:20:17.145523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 
19:20:17.145687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 
19:20:17.145860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.065 [2024-07-12 19:20:17.145892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.065 [2024-07-12 19:20:17.145901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.145910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.145918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.145927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.152828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.152848] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.152855] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x863d40 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153344] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153354] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153369] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153374] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 
19:20:17.153383] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153586] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153590] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153595] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153600] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153610] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153614] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153619] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.153633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa98420 is same with the state(5) to be set 00:24:11.066 [2024-07-12 19:20:17.162598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.066 [2024-07-12 19:20:17.162816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.066 [2024-07-12 19:20:17.162823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.162989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.162996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.163162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.163231] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x257ce90 was disconnected and freed. reset controller. 00:24:11.067 [2024-07-12 19:20:17.164656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:11.067 [2024-07-12 19:20:17.164695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df340 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25cc740 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.067 [2024-07-12 19:20:17.164778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.164788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.067 [2024-07-12 19:20:17.164796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.164804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.067 [2024-07-12 19:20:17.164812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.164821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:11.067 [2024-07-12 19:20:17.164828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.164836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x25e65b0 is same with the state(5) to be set 00:24:11.067 [2024-07-12 19:20:17.164853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d21d0 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25bd040 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25dac20 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b1b10 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258d5d0 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d0a40 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.164953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2759260 (9): Bad file descriptor 00:24:11.067 [2024-07-12 19:20:17.166525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:11.067 [2024-07-12 19:20:17.167538] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.067 [2024-07-12 19:20:17.167587] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.067 [2024-07-12 19:20:17.167704] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.067 [2024-07-12 19:20:17.168137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-07-12 19:20:17.168156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20df340 with addr=10.0.0.2, port=4420 00:24:11.067 [2024-07-12 19:20:17.168165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df340 is same with the state(5) to be set 00:24:11.067 [2024-07-12 19:20:17.168640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.067 [2024-07-12 19:20:17.168679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25bd040 with addr=10.0.0.2, port=4420 00:24:11.067 [2024-07-12 19:20:17.168691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bd040 is same with the state(5) to be set 00:24:11.067 [2024-07-12 19:20:17.168746] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.067 [2024-07-12 19:20:17.168889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.168906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.168924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.168932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.168943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.168950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.168960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.168968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.168978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.168985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.168996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.169003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.169013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.169021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.169031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.169038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.169052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.169060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.067 [2024-07-12 19:20:17.169070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.067 [2024-07-12 19:20:17.169078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.169087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.169095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.169104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.169112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.169121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c6f20 is same with the state(5) to be set 
00:24:11.068 [2024-07-12 19:20:17.169178] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x27c6f20 was disconnected and freed. reset controller. 00:24:11.068 [2024-07-12 19:20:17.169631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df340 (9): Bad file descriptor 00:24:11.068 [2024-07-12 19:20:17.169652] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25bd040 (9): Bad file descriptor 00:24:11.068 [2024-07-12 19:20:17.170705] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.068 [2024-07-12 19:20:17.170752] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:11.068 [2024-07-12 19:20:17.170771] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:11.068 [2024-07-12 19:20:17.170797] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:11.068 [2024-07-12 19:20:17.170805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:11.068 [2024-07-12 19:20:17.170816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:11.068 [2024-07-12 19:20:17.170831] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:11.068 [2024-07-12 19:20:17.170839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:11.068 [2024-07-12 19:20:17.170847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:11.068 [2024-07-12 19:20:17.170899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.170912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.170924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.170932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.170942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.170950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.170960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.170971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.170982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.170989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 
19:20:17.170998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171178] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.068 [2024-07-12 19:20:17.171508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.068 [2024-07-12 19:20:17.171515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171526] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171699] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.171991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.171998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.172008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.172017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.069 [2024-07-12 19:20:17.172025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26eb990 is same with the state(5) to be set 00:24:11.069 [2024-07-12 19:20:17.172065] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26eb990 was disconnected and freed. reset controller. 
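[editor's note] The block of NOTICE lines above is SPDK dumping the I/O still queued on the TCP qpair when the controller reset tore it down; every completion is reported as "(00/08)", i.e. Status Code Type 0x0 (generic command status) with Status Code 0x08, which the NVMe specification defines as "Command Aborted due to SQ Deletion". As a hedged illustration only (this is not SPDK code and not part of the test), the minimal sketch below decodes that sct/sc pair the same way; the helper name and the tiny lookup table are assumptions made just for this example.

/* Illustration only (not SPDK source): decode the "(SCT/SC)" pair printed in
 * the log, e.g. "(00/08)" -> generic command status / ABORTED - SQ DELETION.
 * The table covers only the codes that appear in this log. */
#include <stdio.h>

static const char *decode_generic_sc(unsigned sc)
{
    switch (sc) {
    case 0x00: return "SUCCESS";
    case 0x08: return "ABORTED - SQ DELETION";
    default:   return "UNKNOWN (see NVMe spec, Generic Command Status values)";
    }
}

int main(void)
{
    unsigned sct = 0x00, sc = 0x08;   /* the pair the log prints as "(00/08)" */

    if (sct == 0x00)
        printf("sct=0x%02x sc=0x%02x: %s\n", sct, sc, decode_generic_sc(sc));
    else
        printf("sct=0x%02x sc=0x%02x: non-generic status code type\n", sct, sc);
    return 0;
}

Seen this way, the long run of aborted READ/WRITE entries is expected behavior: once the qpair is disconnected and freed for the reset, every command still outstanding on that submission queue is completed with the SQ-deletion abort status rather than being silently dropped.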
00:24:11.069 [2024-07-12 19:20:17.172117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.069 [2024-07-12 19:20:17.172136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.069 [2024-07-12 19:20:17.172719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.069 [2024-07-12 19:20:17.172761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2759260 with addr=10.0.0.2, port=4420 00:24:11.069 [2024-07-12 19:20:17.172773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2759260 is same with the state(5) to be set 00:24:11.069 [2024-07-12 19:20:17.174305] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:11.069 [2024-07-12 19:20:17.174345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2759260 (9): Bad file descriptor 00:24:11.069 [2024-07-12 19:20:17.174604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.069 [2024-07-12 19:20:17.174620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25b1b10 with addr=10.0.0.2, port=4420 00:24:11.069 [2024-07-12 19:20:17.174627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b1b10 is same with the state(5) to be set 00:24:11.069 [2024-07-12 19:20:17.174635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:11.069 [2024-07-12 19:20:17.174642] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:11.069 [2024-07-12 19:20:17.174650] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:11.069 [2024-07-12 19:20:17.174957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.069 [2024-07-12 19:20:17.174970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b1b10 (9): Bad file descriptor 00:24:11.069 [2024-07-12 19:20:17.174988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e65b0 (9): Bad file descriptor 00:24:11.069 [2024-07-12 19:20:17.175084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:11.069 [2024-07-12 19:20:17.175093] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:11.069 [2024-07-12 19:20:17.175100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
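[editor's note] The reset path above then fails to re-establish the transport: posix_sock_create reports connect() failing with errno 111, which on Linux is ECONNREFUSED (the target address is reachable but nothing is listening on 10.0.0.2:4420 at that moment), so spdk_nvme_ctrlr_reconnect_poll_async gives up and the controller is left in a failed state. The standalone sketch below reproduces only that socket-level condition; it is an assumption-laden illustration, not the SPDK socket code, and the address/port are taken from the log.

/* Illustration only: connect() to a reachable host with no listener on the
 * port fails with errno 111 (ECONNREFUSED) on Linux, matching the
 * "connect() failed, errno = 111" lines above. Address/port from the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };

    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    else
        printf("connected (a listener was present)\n");

    close(fd);
    return 0;
}

The subsequent "Failed to flush tqpair ... (9): Bad file descriptor" and "controller reinitialization failed" errors follow from the same refused connection: with no socket established, the reconnect poll cannot make progress and nvme_ctrlr_fail marks the subsystem (cnode2/cnode3) as failed until a later reset attempt succeeds.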
00:24:11.069 [2024-07-12 19:20:17.175145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.069 [2024-07-12 19:20:17.175157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 
19:20:17.175333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175868] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.070 [2024-07-12 19:20:17.175876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.070 [2024-07-12 19:20:17.175885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.175903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.175921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.175940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.175957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.175975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.175992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.175999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176044] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176227] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.176287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.176296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c6d40 is same with the state(5) to be set 00:24:11.071 [2024-07-12 19:20:17.177571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177692] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.071 [2024-07-12 19:20:17.177896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.071 [2024-07-12 19:20:17.177903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.177914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.177923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.177937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.177945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.177956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.177964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.177974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.177982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.177992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.338 [2024-07-12 19:20:17.178418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.338 [2024-07-12 19:20:17.178426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:11.338 [2024-07-12 19:20:17.178435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.338 [2024-07-12 19:20:17.178443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 17 further identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs for cid:47-63, lba:22400-24448, len:128 ...]
00:24:11.339 [2024-07-12 19:20:17.178750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2d13990 is same with the state(5) to be set
00:24:11.339 [2024-07-12 19:20:17.180022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.339 [2024-07-12 19:20:17.180036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs for cid:1-63, lba:24704-32640, len:128 ...]
00:24:11.340 [2024-07-12 19:20:17.181165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2ebb530 is same with the state(5) to be set
00:24:11.340 [2024-07-12 19:20:17.182430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.340 [2024-07-12 19:20:17.182446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs for cid:1-63, lba:16512-24448, len:128 ...]
00:24:11.342 [2024-07-12 19:20:17.183576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30630d0 is same with the state(5) to be set
00:24:11.342 [2024-07-12 19:20:17.184859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.342 [2024-07-12 19:20:17.184874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 52 further identical READ / ABORTED - SQ DELETION (00/08) NOTICE pairs for cid:1-52, lba:16512-23040, len:128 ...]
00:24:11.343 [2024-07-12 19:20:17.185813] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.185986] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.343 [2024-07-12 19:20:17.185994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.343 [2024-07-12 19:20:17.186002] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27c02b0 is same with the state(5) to be set 00:24:11.343 [2024-07-12 19:20:17.188307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.343 [2024-07-12 19:20:17.188330] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:11.343 [2024-07-12 19:20:17.188342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:11.343 [2024-07-12 19:20:17.188351] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:11.343 [2024-07-12 19:20:17.188425] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.343 [2024-07-12 19:20:17.188441] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.343 [2024-07-12 19:20:17.188504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:11.343 [2024-07-12 19:20:17.188515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:11.343 [2024-07-12 19:20:17.188878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.343 [2024-07-12 19:20:17.188892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x258d5d0 with addr=10.0.0.2, port=4420 00:24:11.343 [2024-07-12 19:20:17.188900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x258d5d0 is same with the state(5) to be set 00:24:11.343 [2024-07-12 19:20:17.189278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.343 [2024-07-12 19:20:17.189290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25dac20 with addr=10.0.0.2, port=4420 00:24:11.343 [2024-07-12 19:20:17.189297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25dac20 is same with the state(5) to be set 00:24:11.343 [2024-07-12 19:20:17.189702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.343 [2024-07-12 19:20:17.189713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25cc740 with addr=10.0.0.2, port=4420 00:24:11.343 [2024-07-12 19:20:17.189720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25cc740 is same with the state(5) to be set 00:24:11.343 [2024-07-12 19:20:17.190784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.190989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.190999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191169] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191342] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.344 [2024-07-12 19:20:17.191538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.344 [2024-07-12 19:20:17.191548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191696] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191867] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.345 [2024-07-12 19:20:17.191900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.345 [2024-07-12 19:20:17.191908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x27bedb0 is same with the state(5) to be set 00:24:11.345 [2024-07-12 19:20:17.193629] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:11.345 [2024-07-12 19:20:17.193656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:11.345 [2024-07-12 19:20:17.193665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:11.345 [2024-07-12 19:20:17.193674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:11.345 task offset: 24576 on job bdev=Nvme5n1 fails 00:24:11.345 00:24:11.345 Latency(us) 00:24:11.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.345 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme1n1 ended in about 0.93 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme1n1 : 0.93 209.95 13.12 68.56 0.00 227216.04 9120.43 242920.11 00:24:11.345 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme2n1 ended in about 0.93 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme2n1 : 0.93 205.03 12.81 12.95 0.00 283632.72 24357.55 241172.48 00:24:11.345 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme3n1 ended in about 0.93 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme3n1 : 0.93 206.38 12.90 68.79 0.00 220474.67 22063.79 249910.61 00:24:11.345 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme4n1 ended in about 0.92 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme4n1 : 0.92 208.16 13.01 69.39 0.00 213733.55 19988.48 246415.36 00:24:11.345 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme5n1 ended in about 0.92 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme5n1 : 0.92 208.59 13.04 69.53 0.00 208524.59 20753.07 244667.73 00:24:11.345 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme6n1 ended in about 0.94 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme6n1 : 0.94 136.75 8.55 68.38 0.00 276995.41 19660.80 244667.73 00:24:11.345 Job: Nvme7n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme7n1 ended in about 0.94 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme7n1 : 0.94 204.60 12.79 68.20 0.00 203552.00 12342.61 248162.99 00:24:11.345 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme8n1 ended in about 0.94 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme8n1 : 0.94 136.05 8.50 68.03 0.00 265958.68 25995.95 258648.75 00:24:11.345 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme9n1 ended in about 0.95 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme9n1 : 0.95 140.13 8.76 67.43 0.00 255666.36 19442.35 246415.36 00:24:11.345 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:11.345 Job: Nvme10n1 ended in about 0.94 seconds with error 00:24:11.345 Verification LBA range: start 0x0 length 0x400 00:24:11.345 Nvme10n1 : 0.94 135.70 8.48 67.85 0.00 254233.03 21299.20 270882.13 00:24:11.345 =================================================================================================================== 00:24:11.345 Total : 1791.36 111.96 629.10 0.00 237470.16 9120.43 270882.13 00:24:11.345 [2024-07-12 19:20:17.217697] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:11.345 [2024-07-12 19:20:17.217743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:11.345 [2024-07-12 19:20:17.218301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.345 [2024-07-12 19:20:17.218321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d0a40 with addr=10.0.0.2, port=4420 00:24:11.345 [2024-07-12 19:20:17.218331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d0a40 is same with the state(5) to be set 00:24:11.345 [2024-07-12 19:20:17.218629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.345 [2024-07-12 19:20:17.218640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25d21d0 with addr=10.0.0.2, port=4420 00:24:11.345 [2024-07-12 19:20:17.218647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25d21d0 is same with the state(5) to be set 00:24:11.345 [2024-07-12 19:20:17.218660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x258d5d0 (9): Bad file descriptor 00:24:11.345 [2024-07-12 19:20:17.218672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25dac20 (9): Bad file descriptor 00:24:11.345 [2024-07-12 19:20:17.218681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25cc740 (9): Bad file descriptor 00:24:11.345 [2024-07-12 19:20:17.219233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.345 [2024-07-12 19:20:17.219249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25bd040 with addr=10.0.0.2, port=4420 00:24:11.345 [2024-07-12 19:20:17.219256] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25bd040 is same with the state(5) to be set 00:24:11.345 [2024-07-12 19:20:17.219653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.345 [2024-07-12 
19:20:17.219664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20df340 with addr=10.0.0.2, port=4420 00:24:11.345 [2024-07-12 19:20:17.219671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20df340 is same with the state(5) to be set 00:24:11.346 [2024-07-12 19:20:17.219888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.346 [2024-07-12 19:20:17.219900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2759260 with addr=10.0.0.2, port=4420 00:24:11.346 [2024-07-12 19:20:17.219907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2759260 is same with the state(5) to be set 00:24:11.346 [2024-07-12 19:20:17.220294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.346 [2024-07-12 19:20:17.220305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25b1b10 with addr=10.0.0.2, port=4420 00:24:11.346 [2024-07-12 19:20:17.220312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25b1b10 is same with the state(5) to be set 00:24:11.346 [2024-07-12 19:20:17.220709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:11.346 [2024-07-12 19:20:17.220720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x25e65b0 with addr=10.0.0.2, port=4420 00:24:11.346 [2024-07-12 19:20:17.220727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25e65b0 is same with the state(5) to be set 00:24:11.346 [2024-07-12 19:20:17.220736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d0a40 (9): Bad file descriptor 00:24:11.346 [2024-07-12 19:20:17.220745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25d21d0 (9): Bad file descriptor 00:24:11.346 [2024-07-12 19:20:17.220754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:11.346 [2024-07-12 19:20:17.220760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:11.346 [2024-07-12 19:20:17.220769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:11.346 [2024-07-12 19:20:17.220782] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:11.346 [2024-07-12 19:20:17.220800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:11.346 [2024-07-12 19:20:17.220807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:11.346 [2024-07-12 19:20:17.220817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:11.346 [2024-07-12 19:20:17.220824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:11.346 [2024-07-12 19:20:17.220830] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:11.346 [2024-07-12 19:20:17.220858] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
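The repeated "connect() failed, errno = 111" entries above come from SPDK's POSIX sock layer; on Linux errno 111 is ECONNREFUSED, which is what the host sees once nothing is listening on 10.0.0.2:4420 any more after the target went down in this shutdown test. The same condition can be observed from plain bash (an illustrative probe only, assuming a bash build with /dev/tcp support; it is not part of shutdown.sh):

    #!/usr/bin/env bash
    # Probe the address/port that the errno=111 failures above refer to.
    addr=10.0.0.2 port=4420
    if timeout 2 bash -c "exec 3<>/dev/tcp/${addr}/${port}" 2>/dev/null; then
        echo "listener still up on ${addr}:${port}"
    else
        echo "connect to ${addr}:${port} failed (a refused connection reports errno 111)"
    fi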
00:24:11.346 [2024-07-12 19:20:17.220870] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.346 [2024-07-12 19:20:17.220881] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.346 [2024-07-12 19:20:17.220894] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.346 [2024-07-12 19:20:17.220904] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:11.346 [2024-07-12 19:20:17.221238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.346 [2024-07-12 19:20:17.221249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25bd040 (9): Bad file descriptor 00:24:11.347 [2024-07-12 19:20:17.221274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20df340 (9): Bad file descriptor 00:24:11.347 [2024-07-12 19:20:17.221284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2759260 (9): Bad file descriptor 00:24:11.347 [2024-07-12 19:20:17.221293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25b1b10 (9): Bad file descriptor 00:24:11.347 [2024-07-12 19:20:17.221303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25e65b0 (9): Bad file descriptor 00:24:11.347 [2024-07-12 19:20:17.221312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:11.347 [2024-07-12 19:20:17.221336] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221342] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221349] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:11.347 [2024-07-12 19:20:17.221774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221786] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221799] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221806] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:24:11.347 [2024-07-12 19:20:17.221816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:11.347 [2024-07-12 19:20:17.221841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221847] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:11.347 [2024-07-12 19:20:17.221863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:11.347 [2024-07-12 19:20:17.221886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:11.347 [2024-07-12 19:20:17.221893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:11.347 [2024-07-12 19:20:17.221900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:11.347 [2024-07-12 19:20:17.221936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221949] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221955] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:11.347 [2024-07-12 19:20:17.221961] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
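The bdevperf summary above is internally consistent: the per-device IOPS and MiB/s columns add up to the Total row (about 1791.3 IOPS and 111.96 MiB/s), and because every job uses 65536-byte I/Os, MiB/s is simply IOPS divided by 16. A throwaway cross-check along those lines (a sketch only; bdevperf.log is a hypothetical capture of the table rows, one per line and without the CI timestamp prefix -- the test itself does not write such a file):

    awk '$1 ~ /^Nvme[0-9]+n1$/ && $2 == ":" { iops += $4; mibs += $5 }
         END { printf "IOPS %.2f  MiB/s %.2f  IOPS/16 %.2f\n", iops, mibs, iops/16 }' \
        bdevperf.log
    # With the rows above this prints roughly: IOPS 1791.34  MiB/s 111.96  IOPS/16 111.96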
00:24:11.347 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:11.347 19:20:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1507542 00:24:12.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1507542) - No such process 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:12.288 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:12.548 rmmod nvme_tcp 00:24:12.548 rmmod nvme_fabrics 00:24:12.548 rmmod nvme_keyring 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.548 19:20:18 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.455 19:20:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:14.455 00:24:14.455 real 0m7.670s 00:24:14.455 user 0m18.456s 00:24:14.455 sys 0m1.227s 00:24:14.455 
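The teardown trace above is deliberately tolerant: "kill -9 1507542" reports "No such process" because the target PID is already gone, the trace then shows "true" running on the same shutdown.sh line so the failure is swallowed, and nvmftestfini afterwards unloads the kernel fabric modules, which is why rmmod nvme_tcp, nvme_fabrics and nvme_keyring appear in the output. A condensed sketch of that pattern (illustrative only, not the suite's actual shutdown.sh/common.sh code):

    pid=1507542                         # PID taken from the trace above
    kill -9 "$pid" 2>/dev/null || true  # tolerate "No such process"
    rm -f ./local-job0-0-verify.state   # per-job bdevperf state file
    for mod in nvme-tcp nvme-fabrics; do
        modprobe -v -r "$mod" || true   # the trace shows nvme_keyring going away with these
    done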
19:20:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.455 19:20:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:14.455 ************************************ 00:24:14.455 END TEST nvmf_shutdown_tc3 00:24:14.455 ************************************ 00:24:14.715 19:20:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:14.715 19:20:20 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:14.715 00:24:14.715 real 0m31.972s 00:24:14.715 user 1m15.251s 00:24:14.715 sys 0m8.843s 00:24:14.715 19:20:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:14.715 19:20:20 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:14.715 ************************************ 00:24:14.715 END TEST nvmf_shutdown 00:24:14.715 ************************************ 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:14.715 19:20:20 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.715 19:20:20 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.715 19:20:20 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:14.715 19:20:20 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:14.715 19:20:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:14.715 ************************************ 00:24:14.715 START TEST nvmf_multicontroller 00:24:14.715 ************************************ 00:24:14.715 19:20:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:14.715 * Looking for test storage... 
00:24:14.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:14.715 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:14.715 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:14.976 19:20:20 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:14.976 19:20:20 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:21.564 19:20:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:21.564 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:21.564 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:21.564 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:21.564 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:21.564 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:21.825 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:21.825 19:20:27 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:21.825 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:21.825 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:21.825 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:21.826 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:21.826 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.416 ms 00:24:21.826 00:24:21.826 --- 10.0.0.2 ping statistics --- 00:24:21.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.826 rtt min/avg/max/mdev = 0.416/0.416/0.416/0.000 ms 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:21.826 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:21.826 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:24:21.826 00:24:21.826 --- 10.0.0.1 ping statistics --- 00:24:21.826 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:21.826 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1512367 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1512367 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1512367 ']' 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:21.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:21.826 19:20:27 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.087 [2024-07-12 19:20:27.974550] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:24:22.087 [2024-07-12 19:20:27.974619] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.087 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.087 [2024-07-12 19:20:28.061934] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:22.087 [2024-07-12 19:20:28.155935] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.087 [2024-07-12 19:20:28.155997] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.087 [2024-07-12 19:20:28.156005] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.087 [2024-07-12 19:20:28.156012] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.087 [2024-07-12 19:20:28.156018] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
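For context, the EAL and app_setup_trace notices above come from plain nvmf_tgt started inside the test namespace by nvmfappstart -m 0xE. A rough equivalent built only from commands visible in this log, with a simplified stand-in for waitforlisten (the real helper keeps retrying until the app answers on /var/tmp/spdk.sock rather than just checking that the socket file exists):

  # Hedged sketch of the target launch behind nvmfappstart -m 0xE.
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # Simplified wait loop standing in for waitforlisten.
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done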
00:24:22.087 [2024-07-12 19:20:28.156170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.087 [2024-07-12 19:20:28.156390] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.087 [2024-07-12 19:20:28.156392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.658 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 [2024-07-12 19:20:28.789969] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 Malloc0 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 [2024-07-12 19:20:28.860423] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 
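The RPCs above assemble the first target-side subsystem: a TCP transport, a 64 MB Malloc0 bdev with 512-byte blocks, nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 10.0.0.2:4420 (a second listener on 4421 follows). Since rpc_cmd ultimately drives scripts/rpc.py, roughly the same configuration could be reproduced by hand; a sketch using only arguments shown in the log:

  # Hedged sketch: the same target configuration via scripts/rpc.py.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420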
19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 [2024-07-12 19:20:28.872366] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 Malloc1 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1512622 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1512622 /var/tmp/bdevperf.sock 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1512622 ']' 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:22.919 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.920 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:22.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:22.920 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.920 19:20:28 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 NVMe0n1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.860 1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 request: 00:24:23.860 { 00:24:23.860 "name": "NVMe0", 00:24:23.860 "trtype": "tcp", 00:24:23.860 "traddr": "10.0.0.2", 00:24:23.860 "adrfam": "ipv4", 00:24:23.860 "trsvcid": "4420", 00:24:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.860 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:23.860 "hostaddr": "10.0.0.2", 00:24:23.860 "hostsvcid": "60000", 00:24:23.860 "prchk_reftag": false, 00:24:23.860 "prchk_guard": false, 00:24:23.860 "hdgst": false, 00:24:23.860 "ddgst": false, 00:24:23.860 "method": "bdev_nvme_attach_controller", 00:24:23.860 "req_id": 1 00:24:23.860 } 00:24:23.860 Got JSON-RPC error response 00:24:23.860 response: 00:24:23.860 { 00:24:23.860 "code": -114, 00:24:23.860 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:23.860 } 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 request: 00:24:23.860 { 00:24:23.860 "name": "NVMe0", 00:24:23.860 "trtype": "tcp", 00:24:23.860 "traddr": "10.0.0.2", 00:24:23.860 "adrfam": "ipv4", 00:24:23.860 "trsvcid": "4420", 00:24:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:23.860 "hostaddr": "10.0.0.2", 00:24:23.860 "hostsvcid": "60000", 00:24:23.860 "prchk_reftag": false, 00:24:23.860 "prchk_guard": false, 00:24:23.860 
"hdgst": false, 00:24:23.860 "ddgst": false, 00:24:23.860 "method": "bdev_nvme_attach_controller", 00:24:23.860 "req_id": 1 00:24:23.860 } 00:24:23.860 Got JSON-RPC error response 00:24:23.860 response: 00:24:23.860 { 00:24:23.860 "code": -114, 00:24:23.860 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:23.860 } 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 request: 00:24:23.860 { 00:24:23.860 "name": "NVMe0", 00:24:23.860 "trtype": "tcp", 00:24:23.860 "traddr": "10.0.0.2", 00:24:23.860 "adrfam": "ipv4", 00:24:23.860 "trsvcid": "4420", 00:24:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.860 "hostaddr": "10.0.0.2", 00:24:23.860 "hostsvcid": "60000", 00:24:23.860 "prchk_reftag": false, 00:24:23.860 "prchk_guard": false, 00:24:23.860 "hdgst": false, 00:24:23.860 "ddgst": false, 00:24:23.860 "multipath": "disable", 00:24:23.860 "method": "bdev_nvme_attach_controller", 00:24:23.860 "req_id": 1 00:24:23.860 } 00:24:23.860 Got JSON-RPC error response 00:24:23.860 response: 00:24:23.860 { 00:24:23.860 "code": -114, 00:24:23.860 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:23.860 } 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.860 19:20:29 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:23.860 request: 00:24:23.860 { 00:24:23.860 "name": "NVMe0", 00:24:23.860 "trtype": "tcp", 00:24:23.860 "traddr": "10.0.0.2", 00:24:23.860 "adrfam": "ipv4", 00:24:23.860 "trsvcid": "4420", 00:24:23.860 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:23.860 "hostaddr": "10.0.0.2", 00:24:23.860 "hostsvcid": "60000", 00:24:23.860 "prchk_reftag": false, 00:24:23.860 "prchk_guard": false, 00:24:23.860 "hdgst": false, 00:24:23.860 "ddgst": false, 00:24:23.860 "multipath": "failover", 00:24:23.860 "method": "bdev_nvme_attach_controller", 00:24:23.860 "req_id": 1 00:24:23.860 } 00:24:23.860 Got JSON-RPC error response 00:24:23.860 response: 00:24:23.860 { 00:24:23.860 "code": -114, 00:24:23.860 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:23.860 } 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.860 19:20:29 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.120 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.120 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.381 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:24.381 19:20:30 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:25.321 0 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1512622 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1512622 ']' 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1512622 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.321 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1512622 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1512622' 00:24:25.581 killing process with pid 1512622 00:24:25.581 19:20:31 
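Taken together, the attach attempts above are the point of this test: re-attaching NVMe0 over the same 4420 path (with a different host NQN, with multipath disabled, or with failover), or pointing the same name at a different subsystem, is rejected with JSON-RPC error -114, while a second path to the same subsystem on port 4421 attaches cleanly and is later detached; NVMe1 is then attached on 4421 so bdevperf sees two controllers before I/O starts. The count check at multicontroller.sh@90 can be reproduced against the bdevperf RPC socket; a sketch based on the command in the log:

  # Hedged sketch of the controller-count check before running I/O.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)
  [ "$count" -eq 2 ]    # expects NVMe0 (10.0.0.2:4420) and NVMe1 (10.0.0.2:4421)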
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1512622 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1512622 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.581 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:25.582 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:25.582 [2024-07-12 19:20:28.993431] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:24:25.582 [2024-07-12 19:20:28.993488] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512622 ] 00:24:25.582 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.582 [2024-07-12 19:20:29.052462] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.582 [2024-07-12 19:20:29.116706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:25.582 [2024-07-12 19:20:30.251070] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 266d29c8-2594-4629-bf0a-0e9af8bcb7dd already exists 00:24:25.582 [2024-07-12 19:20:30.251102] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:266d29c8-2594-4629-bf0a-0e9af8bcb7dd alias for bdev NVMe1n1 00:24:25.582 [2024-07-12 19:20:30.251110] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:25.582 Running I/O for 1 seconds... 
00:24:25.582 00:24:25.582 Latency(us) 00:24:25.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.582 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:25.582 NVMe0n1 : 1.00 22013.79 85.99 0.00 0.00 5801.87 3932.16 13817.17 00:24:25.582 =================================================================================================================== 00:24:25.582 Total : 22013.79 85.99 0.00 0.00 5801.87 3932.16 13817.17 00:24:25.582 Received shutdown signal, test time was about 1.000000 seconds 00:24:25.582 00:24:25.582 Latency(us) 00:24:25.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:25.582 =================================================================================================================== 00:24:25.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:25.582 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:25.582 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:25.582 rmmod nvme_tcp 00:24:25.582 rmmod nvme_fabrics 00:24:25.582 rmmod nvme_keyring 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1512367 ']' 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1512367 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1512367 ']' 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1512367 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1512367 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1512367' 00:24:25.842 killing process with pid 1512367 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1512367 00:24:25.842 19:20:31 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1512367 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.842 19:20:31 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.390 19:20:33 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:28.390 00:24:28.390 real 0m13.257s 00:24:28.390 user 0m16.305s 00:24:28.390 sys 0m5.900s 00:24:28.390 19:20:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:28.390 19:20:33 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:28.390 ************************************ 00:24:28.390 END TEST nvmf_multicontroller 00:24:28.390 ************************************ 00:24:28.390 19:20:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:28.390 19:20:34 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:28.390 19:20:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:28.390 19:20:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:28.390 19:20:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:28.390 ************************************ 00:24:28.390 START TEST nvmf_aer 00:24:28.390 ************************************ 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:28.390 * Looking for test storage... 
00:24:28.390 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:28.390 19:20:34 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.979 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:34.980 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 
0x159b)' 00:24:34.980 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:34.980 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:34.980 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.980 
19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:34.980 19:20:40 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:34.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:34.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.555 ms 00:24:34.980 00:24:34.980 --- 10.0.0.2 ping statistics --- 00:24:34.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.980 rtt min/avg/max/mdev = 0.555/0.555/0.555/0.000 ms 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:34.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:34.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:24:34.980 00:24:34.980 --- 10.0.0.1 ping statistics --- 00:24:34.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:34.980 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1517294 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1517294 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1517294 ']' 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:34.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:34.980 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:34.980 [2024-07-12 19:20:41.107636] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:24:34.980 [2024-07-12 19:20:41.107688] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.241 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.241 [2024-07-12 19:20:41.173939] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:35.241 [2024-07-12 19:20:41.244005] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.241 [2024-07-12 19:20:41.244041] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:35.241 [2024-07-12 19:20:41.244049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.241 [2024-07-12 19:20:41.244055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.241 [2024-07-12 19:20:41.244060] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.241 [2024-07-12 19:20:41.244200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.241 [2024-07-12 19:20:41.244464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.241 [2024-07-12 19:20:41.244621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:35.241 [2024-07-12 19:20:41.244621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:35.812 [2024-07-12 19:20:41.925737] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:35.812 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.073 Malloc0 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.073 [2024-07-12 19:20:41.985103] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.073 19:20:41 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.073 [ 00:24:36.073 { 00:24:36.073 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:36.073 "subtype": "Discovery", 00:24:36.074 "listen_addresses": [], 00:24:36.074 "allow_any_host": true, 00:24:36.074 "hosts": [] 00:24:36.074 }, 00:24:36.074 { 00:24:36.074 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.074 "subtype": "NVMe", 00:24:36.074 "listen_addresses": [ 00:24:36.074 { 00:24:36.074 "trtype": "TCP", 00:24:36.074 "adrfam": "IPv4", 00:24:36.074 "traddr": "10.0.0.2", 00:24:36.074 "trsvcid": "4420" 00:24:36.074 } 00:24:36.074 ], 00:24:36.074 "allow_any_host": true, 00:24:36.074 "hosts": [], 00:24:36.074 "serial_number": "SPDK00000000000001", 00:24:36.074 "model_number": "SPDK bdev Controller", 00:24:36.074 "max_namespaces": 2, 00:24:36.074 "min_cntlid": 1, 00:24:36.074 "max_cntlid": 65519, 00:24:36.074 "namespaces": [ 00:24:36.074 { 00:24:36.074 "nsid": 1, 00:24:36.074 "bdev_name": "Malloc0", 00:24:36.074 "name": "Malloc0", 00:24:36.074 "nguid": "75A52D87878F48ACA1616BFAB609749A", 00:24:36.074 "uuid": "75a52d87-878f-48ac-a161-6bfab609749a" 00:24:36.074 } 00:24:36.074 ] 00:24:36.074 } 00:24:36.074 ] 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1517496 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:36.074 EAL: No free 2048 kB hugepages reported on node 1 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:24:36.074 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:24:36.334 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:36.334 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:36.334 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.335 Malloc1 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.335 [ 00:24:36.335 { 00:24:36.335 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:36.335 "subtype": "Discovery", 00:24:36.335 "listen_addresses": [], 00:24:36.335 "allow_any_host": true, 00:24:36.335 "hosts": [] 00:24:36.335 }, 00:24:36.335 { 00:24:36.335 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.335 "subtype": "NVMe", 00:24:36.335 "listen_addresses": [ 00:24:36.335 { 00:24:36.335 "trtype": "TCP", 00:24:36.335 "adrfam": "IPv4", 00:24:36.335 "traddr": "10.0.0.2", 00:24:36.335 "trsvcid": "4420" 00:24:36.335 } 00:24:36.335 ], 00:24:36.335 "allow_any_host": true, 00:24:36.335 "hosts": [], 00:24:36.335 "serial_number": "SPDK00000000000001", 00:24:36.335 "model_number": "SPDK bdev Controller", 00:24:36.335 "max_namespaces": 2, 00:24:36.335 "min_cntlid": 1, 00:24:36.335 "max_cntlid": 65519, 00:24:36.335 "namespaces": [ 00:24:36.335 { 00:24:36.335 "nsid": 1, 00:24:36.335 "bdev_name": "Malloc0", 00:24:36.335 "name": "Malloc0", 00:24:36.335 "nguid": "75A52D87878F48ACA1616BFAB609749A", 00:24:36.335 "uuid": "75a52d87-878f-48ac-a161-6bfab609749a" 00:24:36.335 }, 00:24:36.335 { 00:24:36.335 "nsid": 2, 00:24:36.335 "bdev_name": "Malloc1", 00:24:36.335 "name": "Malloc1", 00:24:36.335 "nguid": "8DAB24E94F0947B6947BCD3E45037DAF", 00:24:36.335 Asynchronous Event Request test 00:24:36.335 Attaching to 10.0.0.2 00:24:36.335 Attached to 10.0.0.2 00:24:36.335 Registering asynchronous event callbacks... 00:24:36.335 Starting namespace attribute notice tests for all controllers... 00:24:36.335 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:36.335 aer_cb - Changed Namespace 00:24:36.335 Cleaning up... 
00:24:36.335 "uuid": "8dab24e9-4f09-47b6-947b-cd3e45037daf" 00:24:36.335 } 00:24:36.335 ] 00:24:36.335 } 00:24:36.335 ] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1517496 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:36.335 rmmod nvme_tcp 00:24:36.335 rmmod nvme_fabrics 00:24:36.335 rmmod nvme_keyring 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1517294 ']' 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1517294 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1517294 ']' 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1517294 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:36.335 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1517294 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1517294' 00:24:36.599 killing process with pid 1517294 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1517294 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer 
-- common/autotest_common.sh@972 -- # wait 1517294 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:36.599 19:20:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.559 19:20:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:38.559 00:24:38.559 real 0m10.608s 00:24:38.559 user 0m7.345s 00:24:38.559 sys 0m5.579s 00:24:38.559 19:20:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:38.559 19:20:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:38.559 ************************************ 00:24:38.559 END TEST nvmf_aer 00:24:38.559 ************************************ 00:24:38.820 19:20:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:38.820 19:20:44 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:38.820 19:20:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:38.820 19:20:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:38.820 19:20:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:38.820 ************************************ 00:24:38.820 START TEST nvmf_async_init 00:24:38.820 ************************************ 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:38.820 * Looking for test storage... 
00:24:38.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:38.820 19:20:44 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=0ce6915cffa0484b8930f03e892bc789 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:38.821 19:20:44 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:38.821 19:20:44 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:45.407 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:45.407 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:45.407 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:45.407 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:45.407 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:45.408 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:45.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:45.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.746 ms 00:24:45.668 00:24:45.668 --- 10.0.0.2 ping statistics --- 00:24:45.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.668 rtt min/avg/max/mdev = 0.746/0.746/0.746/0.000 ms 00:24:45.668 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:45.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:45.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:24:45.669 00:24:45.669 --- 10.0.0.1 ping statistics --- 00:24:45.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:45.669 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1521647 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1521647 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1521647 ']' 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:45.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:45.669 19:20:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:45.669 [2024-07-12 19:20:51.785132] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:24:45.669 [2024-07-12 19:20:51.785181] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:45.929 EAL: No free 2048 kB hugepages reported on node 1 00:24:45.929 [2024-07-12 19:20:51.849172] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.929 [2024-07-12 19:20:51.913287] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:45.929 [2024-07-12 19:20:51.913323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:45.929 [2024-07-12 19:20:51.913333] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:45.929 [2024-07-12 19:20:51.913340] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:45.929 [2024-07-12 19:20:51.913345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:45.929 [2024-07-12 19:20:51.913363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 [2024-07-12 19:20:52.567480] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 null0 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 19:20:52 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0ce6915cffa0484b8930f03e892bc789 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.499 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.499 [2024-07-12 19:20:52.627740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.759 nvme0n1 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.759 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:46.759 [ 00:24:46.759 { 00:24:46.759 "name": "nvme0n1", 00:24:46.759 "aliases": [ 00:24:46.759 "0ce6915c-ffa0-484b-8930-f03e892bc789" 00:24:46.759 ], 00:24:46.759 "product_name": "NVMe disk", 00:24:46.759 "block_size": 512, 00:24:46.759 "num_blocks": 2097152, 00:24:46.759 "uuid": "0ce6915c-ffa0-484b-8930-f03e892bc789", 00:24:46.759 "assigned_rate_limits": { 00:24:46.759 "rw_ios_per_sec": 0, 00:24:46.759 "rw_mbytes_per_sec": 0, 00:24:46.759 "r_mbytes_per_sec": 0, 00:24:46.760 "w_mbytes_per_sec": 0 00:24:46.760 }, 00:24:46.760 "claimed": false, 00:24:46.760 "zoned": false, 00:24:46.760 "supported_io_types": { 00:24:46.760 "read": true, 00:24:46.760 "write": true, 00:24:46.760 "unmap": false, 00:24:46.760 "flush": true, 00:24:46.760 "reset": true, 00:24:46.760 "nvme_admin": true, 00:24:46.760 "nvme_io": true, 00:24:46.760 "nvme_io_md": false, 00:24:46.760 "write_zeroes": true, 00:24:46.760 "zcopy": false, 00:24:46.760 "get_zone_info": false, 00:24:47.019 "zone_management": false, 00:24:47.019 "zone_append": false, 00:24:47.019 "compare": true, 00:24:47.019 "compare_and_write": true, 00:24:47.019 "abort": true, 00:24:47.019 "seek_hole": false, 00:24:47.019 "seek_data": false, 00:24:47.019 "copy": true, 00:24:47.019 "nvme_iov_md": false 00:24:47.019 }, 00:24:47.019 "memory_domains": [ 00:24:47.019 { 00:24:47.019 "dma_device_id": "system", 00:24:47.019 "dma_device_type": 1 00:24:47.019 } 00:24:47.019 ], 00:24:47.019 "driver_specific": { 00:24:47.019 "nvme": [ 00:24:47.019 { 00:24:47.019 "trid": { 00:24:47.019 "trtype": "TCP", 00:24:47.019 "adrfam": "IPv4", 00:24:47.019 "traddr": "10.0.0.2", 
00:24:47.019 "trsvcid": "4420", 00:24:47.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:47.019 }, 00:24:47.019 "ctrlr_data": { 00:24:47.019 "cntlid": 1, 00:24:47.019 "vendor_id": "0x8086", 00:24:47.019 "model_number": "SPDK bdev Controller", 00:24:47.019 "serial_number": "00000000000000000000", 00:24:47.019 "firmware_revision": "24.09", 00:24:47.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.019 "oacs": { 00:24:47.019 "security": 0, 00:24:47.019 "format": 0, 00:24:47.019 "firmware": 0, 00:24:47.019 "ns_manage": 0 00:24:47.019 }, 00:24:47.019 "multi_ctrlr": true, 00:24:47.019 "ana_reporting": false 00:24:47.019 }, 00:24:47.019 "vs": { 00:24:47.019 "nvme_version": "1.3" 00:24:47.019 }, 00:24:47.019 "ns_data": { 00:24:47.019 "id": 1, 00:24:47.019 "can_share": true 00:24:47.019 } 00:24:47.019 } 00:24:47.019 ], 00:24:47.019 "mp_policy": "active_passive" 00:24:47.019 } 00:24:47.019 } 00:24:47.019 ] 00:24:47.019 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:52 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:47.019 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.019 19:20:52 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 [2024-07-12 19:20:52.904312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:47.019 [2024-07-12 19:20:52.904373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc4adf0 (9): Bad file descriptor 00:24:47.019 [2024-07-12 19:20:53.036219] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 [ 00:24:47.019 { 00:24:47.019 "name": "nvme0n1", 00:24:47.019 "aliases": [ 00:24:47.019 "0ce6915c-ffa0-484b-8930-f03e892bc789" 00:24:47.019 ], 00:24:47.019 "product_name": "NVMe disk", 00:24:47.019 "block_size": 512, 00:24:47.019 "num_blocks": 2097152, 00:24:47.019 "uuid": "0ce6915c-ffa0-484b-8930-f03e892bc789", 00:24:47.019 "assigned_rate_limits": { 00:24:47.019 "rw_ios_per_sec": 0, 00:24:47.019 "rw_mbytes_per_sec": 0, 00:24:47.019 "r_mbytes_per_sec": 0, 00:24:47.019 "w_mbytes_per_sec": 0 00:24:47.019 }, 00:24:47.019 "claimed": false, 00:24:47.019 "zoned": false, 00:24:47.019 "supported_io_types": { 00:24:47.019 "read": true, 00:24:47.019 "write": true, 00:24:47.019 "unmap": false, 00:24:47.019 "flush": true, 00:24:47.019 "reset": true, 00:24:47.019 "nvme_admin": true, 00:24:47.019 "nvme_io": true, 00:24:47.019 "nvme_io_md": false, 00:24:47.019 "write_zeroes": true, 00:24:47.019 "zcopy": false, 00:24:47.019 "get_zone_info": false, 00:24:47.019 "zone_management": false, 00:24:47.019 "zone_append": false, 00:24:47.019 "compare": true, 00:24:47.019 "compare_and_write": true, 00:24:47.019 "abort": true, 00:24:47.019 "seek_hole": false, 00:24:47.019 "seek_data": false, 00:24:47.019 "copy": true, 00:24:47.019 "nvme_iov_md": false 00:24:47.019 }, 00:24:47.019 "memory_domains": [ 00:24:47.019 { 00:24:47.019 "dma_device_id": "system", 00:24:47.019 "dma_device_type": 1 
00:24:47.019 } 00:24:47.019 ], 00:24:47.019 "driver_specific": { 00:24:47.019 "nvme": [ 00:24:47.019 { 00:24:47.019 "trid": { 00:24:47.019 "trtype": "TCP", 00:24:47.019 "adrfam": "IPv4", 00:24:47.019 "traddr": "10.0.0.2", 00:24:47.019 "trsvcid": "4420", 00:24:47.019 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:47.019 }, 00:24:47.019 "ctrlr_data": { 00:24:47.019 "cntlid": 2, 00:24:47.019 "vendor_id": "0x8086", 00:24:47.019 "model_number": "SPDK bdev Controller", 00:24:47.019 "serial_number": "00000000000000000000", 00:24:47.019 "firmware_revision": "24.09", 00:24:47.019 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:47.019 "oacs": { 00:24:47.019 "security": 0, 00:24:47.019 "format": 0, 00:24:47.019 "firmware": 0, 00:24:47.019 "ns_manage": 0 00:24:47.019 }, 00:24:47.019 "multi_ctrlr": true, 00:24:47.019 "ana_reporting": false 00:24:47.019 }, 00:24:47.019 "vs": { 00:24:47.019 "nvme_version": "1.3" 00:24:47.019 }, 00:24:47.019 "ns_data": { 00:24:47.019 "id": 1, 00:24:47.019 "can_share": true 00:24:47.019 } 00:24:47.019 } 00:24:47.019 ], 00:24:47.019 "mp_policy": "active_passive" 00:24:47.019 } 00:24:47.019 } 00:24:47.019 ] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2rCcHB1eD3 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2rCcHB1eD3 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 [2024-07-12 19:20:53.096934] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:47.019 [2024-07-12 19:20:53.097052] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2rCcHB1eD3 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 [2024-07-12 19:20:53.108952] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2rCcHB1eD3 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.019 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.019 [2024-07-12 19:20:53.121006] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:47.019 [2024-07-12 19:20:53.121043] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:47.279 nvme0n1 00:24:47.279 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.279 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:47.279 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.279 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.279 [ 00:24:47.279 { 00:24:47.279 "name": "nvme0n1", 00:24:47.279 "aliases": [ 00:24:47.280 "0ce6915c-ffa0-484b-8930-f03e892bc789" 00:24:47.280 ], 00:24:47.280 "product_name": "NVMe disk", 00:24:47.280 "block_size": 512, 00:24:47.280 "num_blocks": 2097152, 00:24:47.280 "uuid": "0ce6915c-ffa0-484b-8930-f03e892bc789", 00:24:47.280 "assigned_rate_limits": { 00:24:47.280 "rw_ios_per_sec": 0, 00:24:47.280 "rw_mbytes_per_sec": 0, 00:24:47.280 "r_mbytes_per_sec": 0, 00:24:47.280 "w_mbytes_per_sec": 0 00:24:47.280 }, 00:24:47.280 "claimed": false, 00:24:47.280 "zoned": false, 00:24:47.280 "supported_io_types": { 00:24:47.280 "read": true, 00:24:47.280 "write": true, 00:24:47.280 "unmap": false, 00:24:47.280 "flush": true, 00:24:47.280 "reset": true, 00:24:47.280 "nvme_admin": true, 00:24:47.280 "nvme_io": true, 00:24:47.280 "nvme_io_md": false, 00:24:47.280 "write_zeroes": true, 00:24:47.280 "zcopy": false, 00:24:47.280 "get_zone_info": false, 00:24:47.280 "zone_management": false, 00:24:47.280 "zone_append": false, 00:24:47.280 "compare": true, 00:24:47.280 "compare_and_write": true, 00:24:47.280 "abort": true, 00:24:47.280 "seek_hole": false, 00:24:47.280 "seek_data": false, 00:24:47.280 "copy": true, 00:24:47.280 "nvme_iov_md": false 00:24:47.280 }, 00:24:47.280 "memory_domains": [ 00:24:47.280 { 00:24:47.280 "dma_device_id": "system", 00:24:47.280 "dma_device_type": 1 00:24:47.280 } 00:24:47.280 ], 00:24:47.280 "driver_specific": { 00:24:47.280 "nvme": [ 00:24:47.280 { 00:24:47.280 "trid": { 00:24:47.280 "trtype": "TCP", 00:24:47.280 "adrfam": "IPv4", 00:24:47.280 "traddr": "10.0.0.2", 00:24:47.280 "trsvcid": "4421", 00:24:47.280 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:47.280 }, 00:24:47.280 "ctrlr_data": { 00:24:47.280 "cntlid": 3, 00:24:47.280 "vendor_id": "0x8086", 00:24:47.280 "model_number": "SPDK bdev Controller", 00:24:47.280 "serial_number": "00000000000000000000", 00:24:47.280 "firmware_revision": "24.09", 00:24:47.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:24:47.280 "oacs": { 00:24:47.280 "security": 0, 00:24:47.280 "format": 0, 00:24:47.280 "firmware": 0, 00:24:47.280 "ns_manage": 0 00:24:47.280 }, 00:24:47.280 "multi_ctrlr": true, 00:24:47.280 "ana_reporting": false 00:24:47.280 }, 00:24:47.280 "vs": { 00:24:47.280 "nvme_version": "1.3" 00:24:47.280 }, 00:24:47.280 "ns_data": { 00:24:47.280 "id": 1, 00:24:47.280 "can_share": true 00:24:47.280 } 00:24:47.280 } 00:24:47.280 ], 00:24:47.280 "mp_policy": "active_passive" 00:24:47.280 } 00:24:47.280 } 00:24:47.280 ] 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2rCcHB1eD3 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:47.280 rmmod nvme_tcp 00:24:47.280 rmmod nvme_fabrics 00:24:47.280 rmmod nvme_keyring 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1521647 ']' 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1521647 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1521647 ']' 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1521647 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1521647 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1521647' 00:24:47.280 killing process with pid 1521647 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1521647 00:24:47.280 [2024-07-12 19:20:53.359838] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:24:47.280 [2024-07-12 19:20:53.359863] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:47.280 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1521647 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:47.541 19:20:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.451 19:20:55 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:49.451 00:24:49.451 real 0m10.783s 00:24:49.451 user 0m3.830s 00:24:49.451 sys 0m5.358s 00:24:49.451 19:20:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.451 19:20:55 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:49.451 ************************************ 00:24:49.451 END TEST nvmf_async_init 00:24:49.451 ************************************ 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:49.712 19:20:55 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.712 ************************************ 00:24:49.712 START TEST dma 00:24:49.712 ************************************ 00:24:49.712 19:20:55 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:49.712 * Looking for test storage... 
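For reference, the RPC sequence that the nvmf_async_init test above drives can be condensed as follows. This is a minimal sketch reconstructed from the xtrace output, written as direct scripts/rpc.py invocations against the default RPC socket instead of the harness's rpc_cmd wrapper (that translation is an assumption); the addresses, NQNs, namespace GUID and PSK path are the values that appear in the log.

    # target side: TCP transport, a null bdev, and a subsystem with one namespace
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py bdev_null_create null0 1024 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 0ce6915cffa0484b8930f03e892bc789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # host side: attach the controller as a bdev, inspect it, reset it, detach it
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1
    scripts/rpc.py bdev_nvme_reset_controller nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0
    # TLS variant: restrict hosts, add a --secure-channel listener on 4421, register the PSK, reattach
    # (the PSK file is the mktemp'd, chmod-0600 key created in the trace above)
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2rCcHB1eD3
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2rCcHB1eD3

Each bdev_get_bdevs dump in the log confirms the reattached controller by its bumped cntlid: 1 on the first attach, 2 after the reset, 3 on the TLS connection.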
00:24:49.712 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.712 19:20:55 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.712 19:20:55 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.712 19:20:55 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.712 19:20:55 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.712 19:20:55 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.712 19:20:55 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.712 19:20:55 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.712 19:20:55 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:49.712 19:20:55 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.712 19:20:55 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.712 19:20:55 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:49.712 19:20:55 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:49.712 00:24:49.712 real 0m0.136s 00:24:49.712 user 0m0.056s 00:24:49.712 sys 0m0.088s 00:24:49.712 19:20:55 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:49.712 19:20:55 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:24:49.712 ************************************ 00:24:49.712 END TEST dma 00:24:49.712 ************************************ 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:49.712 19:20:55 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:49.712 19:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:49.973 ************************************ 00:24:49.973 START TEST nvmf_identify 00:24:49.973 ************************************ 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:49.973 * Looking for test storage... 
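The dma test above is effectively a no-op on this configuration: host/dma.sh only has work to do for RDMA transports, so with --transport=tcp it exits immediately and the suite records a pass after roughly a tenth of a second of wall time. A loose sketch of that guard, reconstructed from the '[' tcp '!=' rdma ']' check in the trace (the variable name below is an assumption, not the verbatim script):

    # sketch of the transport guard in host/dma.sh
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
            exit 0    # DMA offload paths are only exercised over RDMA
    fi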
00:24:49.973 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:49.973 19:20:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:24:58.121 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:24:58.121 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:24:58.121 Found net devices under 0000:4b:00.0: cvl_0_0 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:24:58.121 Found net devices under 0000:4b:00.1: cvl_0_1 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:58.121 19:21:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:58.121 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:58.121 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.568 ms 00:24:58.121 00:24:58.121 --- 10.0.0.2 ping statistics --- 00:24:58.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.121 rtt min/avg/max/mdev = 0.568/0.568/0.568/0.000 ms 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:58.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:58.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:24:58.121 00:24:58.121 --- 10.0.0.1 ping statistics --- 00:24:58.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:58.121 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1526307 00:24:58.121 19:21:03 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1526307 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1526307 ']' 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:58.122 19:21:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 [2024-07-12 19:21:03.221170] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:24:58.122 [2024-07-12 19:21:03.221235] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.122 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.122 [2024-07-12 19:21:03.292148] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:58.122 [2024-07-12 19:21:03.368882] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
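Before the identify test proper can run, nvmf_tcp_init (traced just above) splits the two ice ports between the default namespace and a dedicated one, so the initiator genuinely crosses the wire to reach the target on the same machine. Condensed into plain commands, with the interface names and 10.0.0.0/24 addressing taken from the log (run as root; this is a sketch, not the verbatim common.sh logic):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                             # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator sanity check

The target itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why both ping checks above succeed with sub-millisecond RTTs.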
00:24:58.122 [2024-07-12 19:21:03.368920] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.122 [2024-07-12 19:21:03.368928] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.122 [2024-07-12 19:21:03.368935] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.122 [2024-07-12 19:21:03.368941] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:58.122 [2024-07-12 19:21:03.369080] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.122 [2024-07-12 19:21:03.369224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:58.122 [2024-07-12 19:21:03.369285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.122 [2024-07-12 19:21:03.369286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 [2024-07-12 19:21:04.012629] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 Malloc0 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 [2024-07-12 19:21:04.112119] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.122 [ 00:24:58.122 { 00:24:58.122 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:58.122 "subtype": "Discovery", 00:24:58.122 "listen_addresses": [ 00:24:58.122 { 00:24:58.122 "trtype": "TCP", 00:24:58.122 "adrfam": "IPv4", 00:24:58.122 "traddr": "10.0.0.2", 00:24:58.122 "trsvcid": "4420" 00:24:58.122 } 00:24:58.122 ], 00:24:58.122 "allow_any_host": true, 00:24:58.122 "hosts": [] 00:24:58.122 }, 00:24:58.122 { 00:24:58.122 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:58.122 "subtype": "NVMe", 00:24:58.122 "listen_addresses": [ 00:24:58.122 { 00:24:58.122 "trtype": "TCP", 00:24:58.122 "adrfam": "IPv4", 00:24:58.122 "traddr": "10.0.0.2", 00:24:58.122 "trsvcid": "4420" 00:24:58.122 } 00:24:58.122 ], 00:24:58.122 "allow_any_host": true, 00:24:58.122 "hosts": [], 00:24:58.122 "serial_number": "SPDK00000000000001", 00:24:58.122 "model_number": "SPDK bdev Controller", 00:24:58.122 "max_namespaces": 32, 00:24:58.122 "min_cntlid": 1, 00:24:58.122 "max_cntlid": 65519, 00:24:58.122 "namespaces": [ 00:24:58.122 { 00:24:58.122 "nsid": 1, 00:24:58.122 "bdev_name": "Malloc0", 00:24:58.122 "name": "Malloc0", 00:24:58.122 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:58.122 "eui64": "ABCDEF0123456789", 00:24:58.122 "uuid": "9020e99b-cdf5-4eb0-b615-b0d3146095b2" 00:24:58.122 } 00:24:58.122 ] 00:24:58.122 } 00:24:58.122 ] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.122 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:58.122 [2024-07-12 19:21:04.173741] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
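With the target listening, the identify test provisions a Malloc-backed subsystem plus a discovery listener, dumps the subsystem state, and then runs spdk_nvme_identify against the discovery NQN. Condensed from the rpc_cmd calls above, again written as direct scripts/rpc.py invocations for readability (that form, and running from the repository root, are assumptions; all values are the ones in the log):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_get_subsystems                               # the JSON dump shown above
    build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The nvmf_get_subsystems output above confirms both the discovery subsystem and cnode1 with the Malloc0 namespace before the identify pass starts; the DEBUG trace that follows is the identify tool's fabric connect and property-get exchange with the discovery controller.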
00:24:58.122 [2024-07-12 19:21:04.173783] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526496 ] 00:24:58.122 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.122 [2024-07-12 19:21:04.204772] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:58.122 [2024-07-12 19:21:04.204822] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:58.122 [2024-07-12 19:21:04.204828] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:58.122 [2024-07-12 19:21:04.204838] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:58.122 [2024-07-12 19:21:04.204845] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:58.122 [2024-07-12 19:21:04.208150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:58.122 [2024-07-12 19:21:04.208184] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10bbec0 0 00:24:58.122 [2024-07-12 19:21:04.216133] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:58.122 [2024-07-12 19:21:04.216144] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:58.122 [2024-07-12 19:21:04.216152] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:58.122 [2024-07-12 19:21:04.216155] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:58.122 [2024-07-12 19:21:04.216193] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.122 [2024-07-12 19:21:04.216199] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.122 [2024-07-12 19:21:04.216203] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.122 [2024-07-12 19:21:04.216217] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:58.122 [2024-07-12 19:21:04.216234] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.122 [2024-07-12 19:21:04.224132] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.122 [2024-07-12 19:21:04.224149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.122 [2024-07-12 19:21:04.224153] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.122 [2024-07-12 19:21:04.224157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.122 [2024-07-12 19:21:04.224167] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:58.122 [2024-07-12 19:21:04.224174] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:58.122 [2024-07-12 19:21:04.224179] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:58.122 [2024-07-12 19:21:04.224192] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.122 [2024-07-12 19:21:04.224196] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.122 [2024-07-12 19:21:04.224200] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.122 [2024-07-12 19:21:04.224208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.122 [2024-07-12 19:21:04.224221] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.122 [2024-07-12 19:21:04.224440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.122 [2024-07-12 19:21:04.224447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.122 [2024-07-12 19:21:04.224450] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.122 [2024-07-12 19:21:04.224454] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.122 [2024-07-12 19:21:04.224459] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:58.122 [2024-07-12 19:21:04.224466] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:58.122 [2024-07-12 19:21:04.224473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.224476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.224480] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.224487] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.123 [2024-07-12 19:21:04.224498] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.224714] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 [2024-07-12 19:21:04.224721] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.224724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.224728] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.224733] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:58.123 [2024-07-12 19:21:04.224744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:58.123 [2024-07-12 19:21:04.224750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.224754] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.224758] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.224764] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.123 [2024-07-12 19:21:04.224774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.224988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 
[2024-07-12 19:21:04.224995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.224998] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225002] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.225007] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:58.123 [2024-07-12 19:21:04.225016] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225020] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.225030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.123 [2024-07-12 19:21:04.225039] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.225219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 [2024-07-12 19:21:04.225226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.225229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.225238] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:58.123 [2024-07-12 19:21:04.225243] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:58.123 [2024-07-12 19:21:04.225250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:58.123 [2024-07-12 19:21:04.225355] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:58.123 [2024-07-12 19:21:04.225360] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:58.123 [2024-07-12 19:21:04.225368] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225372] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.225382] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.123 [2024-07-12 19:21:04.225393] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.225615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 [2024-07-12 19:21:04.225621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.225625] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225631] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.225636] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:58.123 [2024-07-12 19:21:04.225645] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225649] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.225659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.123 [2024-07-12 19:21:04.225669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.225880] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 [2024-07-12 19:21:04.225887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.225890] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225894] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.225898] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:58.123 [2024-07-12 19:21:04.225902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:58.123 [2024-07-12 19:21:04.225910] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:58.123 [2024-07-12 19:21:04.225918] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:58.123 [2024-07-12 19:21:04.225926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.225930] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.225937] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.123 [2024-07-12 19:21:04.225947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.226200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.123 [2024-07-12 19:21:04.226206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.123 [2024-07-12 19:21:04.226210] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226214] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bbec0): datao=0, datal=4096, cccid=0 00:24:58.123 [2024-07-12 19:21:04.226219] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x113ee40) on tqpair(0x10bbec0): expected_datao=0, payload_size=4096 00:24:58.123 [2024-07-12 19:21:04.226223] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226265] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226270] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 [2024-07-12 19:21:04.226475] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.226478] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226482] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.226489] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:58.123 [2024-07-12 19:21:04.226497] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:58.123 [2024-07-12 19:21:04.226503] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:58.123 [2024-07-12 19:21:04.226508] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:58.123 [2024-07-12 19:21:04.226513] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:58.123 [2024-07-12 19:21:04.226517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:58.123 [2024-07-12 19:21:04.226525] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:58.123 [2024-07-12 19:21:04.226532] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226536] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226540] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.226547] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:58.123 [2024-07-12 19:21:04.226558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.123 [2024-07-12 19:21:04.226781] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.123 [2024-07-12 19:21:04.226787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.123 [2024-07-12 19:21:04.226790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226794] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.123 [2024-07-12 19:21:04.226802] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226806] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226809] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.226815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.123 [2024-07-12 19:21:04.226821] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226825] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226828] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.226834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.123 [2024-07-12 19:21:04.226840] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226843] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.226852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.123 [2024-07-12 19:21:04.226858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.123 [2024-07-12 19:21:04.226865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bbec0) 00:24:58.123 [2024-07-12 19:21:04.226871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.124 [2024-07-12 19:21:04.226876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:58.124 [2024-07-12 19:21:04.226885] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:58.124 [2024-07-12 19:21:04.226894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.124 [2024-07-12 19:21:04.226897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bbec0) 00:24:58.124 [2024-07-12 19:21:04.226904] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.124 [2024-07-12 19:21:04.226916] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113ee40, cid 0, qid 0 00:24:58.124 [2024-07-12 19:21:04.226921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113efc0, cid 1, qid 0 00:24:58.124 [2024-07-12 19:21:04.226926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f140, cid 2, qid 0 00:24:58.124 [2024-07-12 19:21:04.226931] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f2c0, cid 3, qid 0 00:24:58.124 [2024-07-12 19:21:04.226935] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f440, cid 4, qid 0 00:24:58.124 [2024-07-12 19:21:04.227188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.124 [2024-07-12 19:21:04.227194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.124 [2024-07-12 19:21:04.227198] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.124 [2024-07-12 19:21:04.227202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f440) on tqpair=0x10bbec0 00:24:58.124 [2024-07-12 19:21:04.227207] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:58.124 [2024-07-12 19:21:04.227212] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:58.124 [2024-07-12 19:21:04.227222] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.124 [2024-07-12 19:21:04.227226] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bbec0) 00:24:58.124 [2024-07-12 19:21:04.227233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.124 [2024-07-12 19:21:04.227243] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f440, cid 4, qid 0 00:24:58.124 [2024-07-12 19:21:04.227439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.124 [2024-07-12 19:21:04.227446] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.124 [2024-07-12 19:21:04.227449] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.124 [2024-07-12 19:21:04.227452] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bbec0): datao=0, datal=4096, cccid=4 00:24:58.124 [2024-07-12 19:21:04.227457] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x113f440) on tqpair(0x10bbec0): expected_datao=0, payload_size=4096 00:24:58.124 [2024-07-12 19:21:04.227461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.124 [2024-07-12 19:21:04.227497] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.124 [2024-07-12 19:21:04.227501] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.387 [2024-07-12 19:21:04.271140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.387 [2024-07-12 19:21:04.271143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f440) on tqpair=0x10bbec0 00:24:58.387 [2024-07-12 19:21:04.271160] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:58.387 [2024-07-12 19:21:04.271184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271188] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bbec0) 00:24:58.387 [2024-07-12 19:21:04.271195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.387 [2024-07-12 19:21:04.271205] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271209] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271212] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10bbec0) 00:24:58.387 [2024-07-12 19:21:04.271218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.387 [2024-07-12 19:21:04.271233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x113f440, cid 4, qid 0 00:24:58.387 [2024-07-12 19:21:04.271238] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f5c0, cid 5, qid 0 00:24:58.387 [2024-07-12 19:21:04.271494] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.387 [2024-07-12 19:21:04.271500] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.387 [2024-07-12 19:21:04.271504] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271507] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bbec0): datao=0, datal=1024, cccid=4 00:24:58.387 [2024-07-12 19:21:04.271512] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x113f440) on tqpair(0x10bbec0): expected_datao=0, payload_size=1024 00:24:58.387 [2024-07-12 19:21:04.271516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271522] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271526] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271532] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.387 [2024-07-12 19:21:04.271537] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.387 [2024-07-12 19:21:04.271541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.271544] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f5c0) on tqpair=0x10bbec0 00:24:58.387 [2024-07-12 19:21:04.312314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.387 [2024-07-12 19:21:04.312323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.387 [2024-07-12 19:21:04.312326] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.312330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f440) on tqpair=0x10bbec0 00:24:58.387 [2024-07-12 19:21:04.312347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.387 [2024-07-12 19:21:04.312352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bbec0) 00:24:58.387 [2024-07-12 19:21:04.312358] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.387 [2024-07-12 19:21:04.312373] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f440, cid 4, qid 0 00:24:58.387 [2024-07-12 19:21:04.312575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.387 [2024-07-12 19:21:04.312581] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.387 [2024-07-12 19:21:04.312585] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.312588] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bbec0): datao=0, datal=3072, cccid=4 00:24:58.388 [2024-07-12 19:21:04.312593] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x113f440) on tqpair(0x10bbec0): expected_datao=0, payload_size=3072 00:24:58.388 [2024-07-12 19:21:04.312597] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.312636] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.312640] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.312831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.388 [2024-07-12 19:21:04.312837] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.388 [2024-07-12 19:21:04.312840] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.312844] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f440) on tqpair=0x10bbec0 00:24:58.388 [2024-07-12 19:21:04.312855] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.312859] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10bbec0) 00:24:58.388 [2024-07-12 19:21:04.312865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.388 [2024-07-12 19:21:04.312879] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f440, cid 4, qid 0 00:24:58.388 [2024-07-12 19:21:04.313119] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.388 [2024-07-12 19:21:04.313130] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.388 [2024-07-12 19:21:04.313133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.313137] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10bbec0): datao=0, datal=8, cccid=4 00:24:58.388 [2024-07-12 19:21:04.313141] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x113f440) on tqpair(0x10bbec0): expected_datao=0, payload_size=8 00:24:58.388 [2024-07-12 19:21:04.313145] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.313152] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.313155] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.354313] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.388 [2024-07-12 19:21:04.354323] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.388 [2024-07-12 19:21:04.354327] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.388 [2024-07-12 19:21:04.354330] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f440) on tqpair=0x10bbec0 00:24:58.388 ===================================================== 00:24:58.388 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:58.388 ===================================================== 00:24:58.388 Controller Capabilities/Features 00:24:58.388 ================================ 00:24:58.388 Vendor ID: 0000 00:24:58.388 Subsystem Vendor ID: 0000 00:24:58.388 Serial Number: .................... 00:24:58.388 Model Number: ........................................ 
00:24:58.388 Firmware Version: 24.09
00:24:58.388 Recommended Arb Burst: 0
00:24:58.388 IEEE OUI Identifier: 00 00 00
00:24:58.388 Multi-path I/O
00:24:58.388 May have multiple subsystem ports: No
00:24:58.388 May have multiple controllers: No
00:24:58.388 Associated with SR-IOV VF: No
00:24:58.388 Max Data Transfer Size: 131072
00:24:58.388 Max Number of Namespaces: 0
00:24:58.388 Max Number of I/O Queues: 1024
00:24:58.388 NVMe Specification Version (VS): 1.3
00:24:58.388 NVMe Specification Version (Identify): 1.3
00:24:58.388 Maximum Queue Entries: 128
00:24:58.388 Contiguous Queues Required: Yes
00:24:58.388 Arbitration Mechanisms Supported
00:24:58.388 Weighted Round Robin: Not Supported
00:24:58.388 Vendor Specific: Not Supported
00:24:58.388 Reset Timeout: 15000 ms
00:24:58.388 Doorbell Stride: 4 bytes
00:24:58.388 NVM Subsystem Reset: Not Supported
00:24:58.388 Command Sets Supported
00:24:58.388 NVM Command Set: Supported
00:24:58.388 Boot Partition: Not Supported
00:24:58.388 Memory Page Size Minimum: 4096 bytes
00:24:58.388 Memory Page Size Maximum: 4096 bytes
00:24:58.388 Persistent Memory Region: Not Supported
00:24:58.388 Optional Asynchronous Events Supported
00:24:58.388 Namespace Attribute Notices: Not Supported
00:24:58.388 Firmware Activation Notices: Not Supported
00:24:58.388 ANA Change Notices: Not Supported
00:24:58.388 PLE Aggregate Log Change Notices: Not Supported
00:24:58.388 LBA Status Info Alert Notices: Not Supported
00:24:58.388 EGE Aggregate Log Change Notices: Not Supported
00:24:58.388 Normal NVM Subsystem Shutdown event: Not Supported
00:24:58.388 Zone Descriptor Change Notices: Not Supported
00:24:58.388 Discovery Log Change Notices: Supported
00:24:58.388 Controller Attributes
00:24:58.388 128-bit Host Identifier: Not Supported
00:24:58.388 Non-Operational Permissive Mode: Not Supported
00:24:58.388 NVM Sets: Not Supported
00:24:58.388 Read Recovery Levels: Not Supported
00:24:58.388 Endurance Groups: Not Supported
00:24:58.388 Predictable Latency Mode: Not Supported
00:24:58.388 Traffic Based Keep ALive: Not Supported
00:24:58.388 Namespace Granularity: Not Supported
00:24:58.388 SQ Associations: Not Supported
00:24:58.388 UUID List: Not Supported
00:24:58.388 Multi-Domain Subsystem: Not Supported
00:24:58.388 Fixed Capacity Management: Not Supported
00:24:58.388 Variable Capacity Management: Not Supported
00:24:58.388 Delete Endurance Group: Not Supported
00:24:58.388 Delete NVM Set: Not Supported
00:24:58.388 Extended LBA Formats Supported: Not Supported
00:24:58.388 Flexible Data Placement Supported: Not Supported
00:24:58.388
00:24:58.388 Controller Memory Buffer Support
00:24:58.388 ================================
00:24:58.388 Supported: No
00:24:58.388
00:24:58.388 Persistent Memory Region Support
00:24:58.388 ================================
00:24:58.388 Supported: No
00:24:58.388
00:24:58.388 Admin Command Set Attributes
00:24:58.388 ============================
00:24:58.388 Security Send/Receive: Not Supported
00:24:58.388 Format NVM: Not Supported
00:24:58.388 Firmware Activate/Download: Not Supported
00:24:58.388 Namespace Management: Not Supported
00:24:58.388 Device Self-Test: Not Supported
00:24:58.388 Directives: Not Supported
00:24:58.388 NVMe-MI: Not Supported
00:24:58.388 Virtualization Management: Not Supported
00:24:58.388 Doorbell Buffer Config: Not Supported
00:24:58.388 Get LBA Status Capability: Not Supported
00:24:58.388 Command & Feature Lockdown Capability: Not Supported
00:24:58.388 Abort Command Limit: 1
00:24:58.388 Async Event Request Limit: 4
00:24:58.388 Number of Firmware Slots: N/A
00:24:58.388 Firmware Slot 1 Read-Only: N/A
00:24:58.388 Firmware Activation Without Reset: N/A
00:24:58.388 Multiple Update Detection Support: N/A
00:24:58.388 Firmware Update Granularity: No Information Provided
00:24:58.388 Per-Namespace SMART Log: No
00:24:58.388 Asymmetric Namespace Access Log Page: Not Supported
00:24:58.388 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:24:58.388 Command Effects Log Page: Not Supported
00:24:58.388 Get Log Page Extended Data: Supported
00:24:58.388 Telemetry Log Pages: Not Supported
00:24:58.388 Persistent Event Log Pages: Not Supported
00:24:58.388 Supported Log Pages Log Page: May Support
00:24:58.388 Commands Supported & Effects Log Page: Not Supported
00:24:58.388 Feature Identifiers & Effects Log Page:May Support
00:24:58.388 NVMe-MI Commands & Effects Log Page: May Support
00:24:58.388 Data Area 4 for Telemetry Log: Not Supported
00:24:58.388 Error Log Page Entries Supported: 128
00:24:58.388 Keep Alive: Not Supported
00:24:58.388
00:24:58.388 NVM Command Set Attributes
00:24:58.388 ==========================
00:24:58.388 Submission Queue Entry Size
00:24:58.388 Max: 1
00:24:58.388 Min: 1
00:24:58.388 Completion Queue Entry Size
00:24:58.388 Max: 1
00:24:58.388 Min: 1
00:24:58.388 Number of Namespaces: 0
00:24:58.388 Compare Command: Not Supported
00:24:58.388 Write Uncorrectable Command: Not Supported
00:24:58.388 Dataset Management Command: Not Supported
00:24:58.388 Write Zeroes Command: Not Supported
00:24:58.388 Set Features Save Field: Not Supported
00:24:58.388 Reservations: Not Supported
00:24:58.388 Timestamp: Not Supported
00:24:58.388 Copy: Not Supported
00:24:58.388 Volatile Write Cache: Not Present
00:24:58.388 Atomic Write Unit (Normal): 1
00:24:58.388 Atomic Write Unit (PFail): 1
00:24:58.388 Atomic Compare & Write Unit: 1
00:24:58.388 Fused Compare & Write: Supported
00:24:58.388 Scatter-Gather List
00:24:58.388 SGL Command Set: Supported
00:24:58.388 SGL Keyed: Supported
00:24:58.388 SGL Bit Bucket Descriptor: Not Supported
00:24:58.388 SGL Metadata Pointer: Not Supported
00:24:58.388 Oversized SGL: Not Supported
00:24:58.388 SGL Metadata Address: Not Supported
00:24:58.388 SGL Offset: Supported
00:24:58.388 Transport SGL Data Block: Not Supported
00:24:58.388 Replay Protected Memory Block: Not Supported
00:24:58.388
00:24:58.388 Firmware Slot Information
00:24:58.388 =========================
00:24:58.388 Active slot: 0
00:24:58.388
00:24:58.388
00:24:58.388 Error Log
00:24:58.388 =========
00:24:58.388
00:24:58.388 Active Namespaces
00:24:58.388 =================
00:24:58.388 Discovery Log Page
00:24:58.388 ==================
00:24:58.388 Generation Counter: 2
00:24:58.388 Number of Records: 2
00:24:58.388 Record Format: 0
00:24:58.388
00:24:58.388 Discovery Log Entry 0
00:24:58.388 ----------------------
00:24:58.388 Transport Type: 3 (TCP)
00:24:58.388 Address Family: 1 (IPv4)
00:24:58.388 Subsystem Type: 3 (Current Discovery Subsystem)
00:24:58.388 Entry Flags:
00:24:58.388 Duplicate Returned Information: 1
00:24:58.389 Explicit Persistent Connection Support for Discovery: 1
00:24:58.389 Transport Requirements:
00:24:58.389 Secure Channel: Not Required
00:24:58.389 Port ID: 0 (0x0000)
00:24:58.389 Controller ID: 65535 (0xffff)
00:24:58.389 Admin Max SQ Size: 128
00:24:58.389 Transport Service Identifier: 4420
00:24:58.389 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:24:58.389 Transport Address: 10.0.0.2
00:24:58.389
Discovery Log Entry 1 00:24:58.389 ---------------------- 00:24:58.389 Transport Type: 3 (TCP) 00:24:58.389 Address Family: 1 (IPv4) 00:24:58.389 Subsystem Type: 2 (NVM Subsystem) 00:24:58.389 Entry Flags: 00:24:58.389 Duplicate Returned Information: 0 00:24:58.389 Explicit Persistent Connection Support for Discovery: 0 00:24:58.389 Transport Requirements: 00:24:58.389 Secure Channel: Not Required 00:24:58.389 Port ID: 0 (0x0000) 00:24:58.389 Controller ID: 65535 (0xffff) 00:24:58.389 Admin Max SQ Size: 128 00:24:58.389 Transport Service Identifier: 4420 00:24:58.389 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:58.389 Transport Address: 10.0.0.2 [2024-07-12 19:21:04.354418] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:58.389 [2024-07-12 19:21:04.354429] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113ee40) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.354436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.389 [2024-07-12 19:21:04.354441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113efc0) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.354446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.389 [2024-07-12 19:21:04.354451] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f140) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.354455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.389 [2024-07-12 19:21:04.354460] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f2c0) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.354465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.389 [2024-07-12 19:21:04.354475] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354479] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354482] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bbec0) 00:24:58.389 [2024-07-12 19:21:04.354490] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.389 [2024-07-12 19:21:04.354504] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f2c0, cid 3, qid 0 00:24:58.389 [2024-07-12 19:21:04.354614] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.389 [2024-07-12 19:21:04.354621] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.389 [2024-07-12 19:21:04.354624] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f2c0) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.354637] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bbec0) 00:24:58.389 [2024-07-12 
19:21:04.354652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.389 [2024-07-12 19:21:04.354665] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f2c0, cid 3, qid 0 00:24:58.389 [2024-07-12 19:21:04.354891] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.389 [2024-07-12 19:21:04.354897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.389 [2024-07-12 19:21:04.354900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f2c0) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.354909] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:58.389 [2024-07-12 19:21:04.354914] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:58.389 [2024-07-12 19:21:04.354923] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.354931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bbec0) 00:24:58.389 [2024-07-12 19:21:04.354937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.389 [2024-07-12 19:21:04.354947] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f2c0, cid 3, qid 0 00:24:58.389 [2024-07-12 19:21:04.359129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.389 [2024-07-12 19:21:04.359137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.389 [2024-07-12 19:21:04.359141] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.359145] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f2c0) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.359155] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.359159] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.359162] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10bbec0) 00:24:58.389 [2024-07-12 19:21:04.359169] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.389 [2024-07-12 19:21:04.359180] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x113f2c0, cid 3, qid 0 00:24:58.389 [2024-07-12 19:21:04.359378] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.389 [2024-07-12 19:21:04.359385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.389 [2024-07-12 19:21:04.359388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.359392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x113f2c0) on tqpair=0x10bbec0 00:24:58.389 [2024-07-12 19:21:04.359399] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 4 milliseconds 00:24:58.389 00:24:58.389 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:58.389 [2024-07-12 19:21:04.398045] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:24:58.389 [2024-07-12 19:21:04.398092] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1526498 ] 00:24:58.389 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.389 [2024-07-12 19:21:04.431646] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:58.389 [2024-07-12 19:21:04.431689] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:58.389 [2024-07-12 19:21:04.431694] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:58.389 [2024-07-12 19:21:04.431704] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:58.389 [2024-07-12 19:21:04.431710] sock.c: 357:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:58.389 [2024-07-12 19:21:04.432018] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:58.389 [2024-07-12 19:21:04.432041] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x55dec0 0 00:24:58.389 [2024-07-12 19:21:04.438129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:58.389 [2024-07-12 19:21:04.438139] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:58.389 [2024-07-12 19:21:04.438143] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:58.389 [2024-07-12 19:21:04.438146] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:58.389 [2024-07-12 19:21:04.438178] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.438183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.438187] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.389 [2024-07-12 19:21:04.438198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:58.389 [2024-07-12 19:21:04.438213] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.389 [2024-07-12 19:21:04.446131] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.389 [2024-07-12 19:21:04.446140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.389 [2024-07-12 19:21:04.446144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.446148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.389 [2024-07-12 19:21:04.446159] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:58.389 [2024-07-12 19:21:04.446165] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:58.389 [2024-07-12 19:21:04.446171] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:58.389 [2024-07-12 19:21:04.446182] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.446186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.446190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.389 [2024-07-12 19:21:04.446197] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.389 [2024-07-12 19:21:04.446209] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.389 [2024-07-12 19:21:04.446295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.389 [2024-07-12 19:21:04.446302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.389 [2024-07-12 19:21:04.446306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.446310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.389 [2024-07-12 19:21:04.446315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:58.389 [2024-07-12 19:21:04.446325] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:58.389 [2024-07-12 19:21:04.446333] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.446336] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.389 [2024-07-12 19:21:04.446340] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.389 [2024-07-12 19:21:04.446347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.390 [2024-07-12 19:21:04.446357] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.446430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.446436] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.446439] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446443] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.446449] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:58.390 [2024-07-12 19:21:04.446456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:58.390 [2024-07-12 19:21:04.446463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446466] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446470] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.446476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.390 [2024-07-12 19:21:04.446486] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.446560] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.446566] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.446570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446573] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.446578] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:58.390 [2024-07-12 19:21:04.446587] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446591] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446595] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.446601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.390 [2024-07-12 19:21:04.446611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.446682] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.446688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.446691] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446695] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.446700] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:58.390 [2024-07-12 19:21:04.446704] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:58.390 [2024-07-12 19:21:04.446711] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:58.390 [2024-07-12 19:21:04.446819] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:58.390 [2024-07-12 19:21:04.446823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:58.390 [2024-07-12 19:21:04.446830] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446834] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446838] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.446844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.390 [2024-07-12 19:21:04.446854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.446926] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.446932] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.446935] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446939] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.446944] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:58.390 [2024-07-12 19:21:04.446953] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.446960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.446967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.390 [2024-07-12 19:21:04.446976] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.447050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.447056] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.447060] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.447064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.447068] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:58.390 [2024-07-12 19:21:04.447072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:58.390 [2024-07-12 19:21:04.447080] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:58.390 [2024-07-12 19:21:04.447087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:58.390 [2024-07-12 19:21:04.447096] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.447099] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.447106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.390 [2024-07-12 19:21:04.447116] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.447221] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.390 [2024-07-12 19:21:04.447228] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.390 [2024-07-12 19:21:04.447232] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.447235] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=4096, cccid=0 00:24:58.390 [2024-07-12 19:21:04.447242] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e0e40) on tqpair(0x55dec0): expected_datao=0, 
payload_size=4096 00:24:58.390 [2024-07-12 19:21:04.447247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.447332] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.447336] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.493141] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.493144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493148] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.493156] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:58.390 [2024-07-12 19:21:04.493164] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:58.390 [2024-07-12 19:21:04.493169] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:58.390 [2024-07-12 19:21:04.493173] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:58.390 [2024-07-12 19:21:04.493177] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:58.390 [2024-07-12 19:21:04.493182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:58.390 [2024-07-12 19:21:04.493190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:58.390 [2024-07-12 19:21:04.493197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493201] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493205] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.493212] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:58.390 [2024-07-12 19:21:04.493225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.390 [2024-07-12 19:21:04.493304] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.390 [2024-07-12 19:21:04.493311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.390 [2024-07-12 19:21:04.493314] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.390 [2024-07-12 19:21:04.493324] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493328] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493332] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.493338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.390 
[2024-07-12 19:21:04.493344] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493347] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493351] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.493356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.390 [2024-07-12 19:21:04.493362] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493366] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493372] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.493378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.390 [2024-07-12 19:21:04.493384] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493387] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.390 [2024-07-12 19:21:04.493396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.390 [2024-07-12 19:21:04.493401] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:58.390 [2024-07-12 19:21:04.493411] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:58.390 [2024-07-12 19:21:04.493418] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.390 [2024-07-12 19:21:04.493421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.391 [2024-07-12 19:21:04.493428] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.391 [2024-07-12 19:21:04.493440] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0e40, cid 0, qid 0 00:24:58.391 [2024-07-12 19:21:04.493445] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e0fc0, cid 1, qid 0 00:24:58.391 [2024-07-12 19:21:04.493450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1140, cid 2, qid 0 00:24:58.391 [2024-07-12 19:21:04.493454] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.391 [2024-07-12 19:21:04.493459] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.391 [2024-07-12 19:21:04.493559] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.391 [2024-07-12 19:21:04.493565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.391 [2024-07-12 19:21:04.493569] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493572] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.391 [2024-07-12 19:21:04.493577] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:58.391 [2024-07-12 19:21:04.493582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:24:58.391 [2024-07-12 19:21:04.493590] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:58.391 [2024-07-12 19:21:04.493596] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:58.391 [2024-07-12 19:21:04.493603] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.391 [2024-07-12 19:21:04.493616] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:58.391 [2024-07-12 19:21:04.493626] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.391 [2024-07-12 19:21:04.493709] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.391 [2024-07-12 19:21:04.493715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.391 [2024-07-12 19:21:04.493719] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493725] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.391 [2024-07-12 19:21:04.493790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:58.391 [2024-07-12 19:21:04.493800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:58.391 [2024-07-12 19:21:04.493808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.391 [2024-07-12 19:21:04.493818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.391 [2024-07-12 19:21:04.493828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.391 [2024-07-12 19:21:04.493909] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.391 [2024-07-12 19:21:04.493915] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.391 [2024-07-12 19:21:04.493919] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493922] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=4096, cccid=4 00:24:58.391 [2024-07-12 19:21:04.493927] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e1440) on tqpair(0x55dec0): expected_datao=0, payload_size=4096 00:24:58.391 [2024-07-12 19:21:04.493931] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493967] 
nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.391 [2024-07-12 19:21:04.493971] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.535230] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.535241] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.535245] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.535249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.655 [2024-07-12 19:21:04.535260] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:58.655 [2024-07-12 19:21:04.535275] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.535284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.535292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.535296] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.655 [2024-07-12 19:21:04.535303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.655 [2024-07-12 19:21:04.535315] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.655 [2024-07-12 19:21:04.535400] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.655 [2024-07-12 19:21:04.535406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.655 [2024-07-12 19:21:04.535410] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.535413] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=4096, cccid=4 00:24:58.655 [2024-07-12 19:21:04.535418] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e1440) on tqpair(0x55dec0): expected_datao=0, payload_size=4096 00:24:58.655 [2024-07-12 19:21:04.535422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.535458] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.535462] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.576187] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.576197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.576200] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.576204] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.655 [2024-07-12 19:21:04.576221] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.576231] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.576239] 
nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.576243] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.655 [2024-07-12 19:21:04.576250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.655 [2024-07-12 19:21:04.576261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.655 [2024-07-12 19:21:04.576342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.655 [2024-07-12 19:21:04.576349] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.655 [2024-07-12 19:21:04.576352] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.576355] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=4096, cccid=4 00:24:58.655 [2024-07-12 19:21:04.576360] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e1440) on tqpair(0x55dec0): expected_datao=0, payload_size=4096 00:24:58.655 [2024-07-12 19:21:04.576364] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.576400] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.576404] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621130] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.621140] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.621143] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621147] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.655 [2024-07-12 19:21:04.621155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621164] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621180] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621185] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621195] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:58.655 [2024-07-12 19:21:04.621200] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:58.655 [2024-07-12 19:21:04.621205] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 
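The DEBUG entries above trace the SPDK host driver's admin-queue bring-up against nqn.2016-06.io.spdk:cnode1: identify controller (transport max_xfer_size vs. MDTS), configure AER, set keep alive timeout, set number of queues, identify active namespaces, identify the namespace and its ID descriptors, then "setting state to ready". The host library runs this state machine itself whenever a controller is attached; a minimal sketch of driving the same handshake outside the test harness is below, assuming the target from this run is still listening on 10.0.0.2:4420 and that the SPDK headers and libraries are available. The program name, error handling and printed fields are illustrative, not taken from this log.

#include <stdio.h>
#include <string.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid;
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	/* Bring up the SPDK environment (hugepages/DPDK) for a host-only tool. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify_sketch";               /* illustrative app name */
	if (spdk_env_init(&env_opts) != 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* Same listener and subsystem NQN as in the trace above. */
	memset(&trid, 0, sizeof(trid));
	spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
	trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
	snprintf(trid.traddr, sizeof(trid.traddr), "%s", "10.0.0.2");
	snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", "4420");
	snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", "nqn.2016-06.io.spdk:cnode1");

	/*
	 * spdk_nvme_connect() drives the init state machine traced above
	 * (identify controller, configure AER, keep alive timeout, number of
	 * queues, identify active NS / NS / ID descriptors) and only returns
	 * once the controller has reached the ready state.
	 */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		fprintf(stderr, "spdk_nvme_connect() to %s failed\n", trid.traddr);
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("connected: CNTLID 0x%04x, %u namespace(s)\n", cdata->cntlid, cdata->nn);

	spdk_nvme_detach(ctrlr);
	return 0;
}

When the host library is built with debug logging enabled, attaching this way can produce the same nvme_ctrlr.c / nvme_tcp.c DEBUG lines as in the trace above.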
00:24:58.655 [2024-07-12 19:21:04.621219] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621225] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.655 [2024-07-12 19:21:04.621232] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.655 [2024-07-12 19:21:04.621239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55dec0) 00:24:58.655 [2024-07-12 19:21:04.621252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:58.655 [2024-07-12 19:21:04.621266] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.655 [2024-07-12 19:21:04.621271] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e15c0, cid 5, qid 0 00:24:58.655 [2024-07-12 19:21:04.621362] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.621369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.621372] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.655 [2024-07-12 19:21:04.621383] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.621389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.621392] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e15c0) on tqpair=0x55dec0 00:24:58.655 [2024-07-12 19:21:04.621405] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621409] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55dec0) 00:24:58.655 [2024-07-12 19:21:04.621415] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.655 [2024-07-12 19:21:04.621425] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e15c0, cid 5, qid 0 00:24:58.655 [2024-07-12 19:21:04.621502] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.621509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.621512] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621516] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e15c0) on tqpair=0x55dec0 00:24:58.655 [2024-07-12 19:21:04.621524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55dec0) 00:24:58.655 [2024-07-12 19:21:04.621534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.655 [2024-07-12 19:21:04.621544] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e15c0, cid 5, qid 0 00:24:58.655 [2024-07-12 19:21:04.621659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.655 [2024-07-12 19:21:04.621665] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.655 [2024-07-12 19:21:04.621669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.655 [2024-07-12 19:21:04.621672] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e15c0) on tqpair=0x55dec0 00:24:58.656 [2024-07-12 19:21:04.621681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.621685] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55dec0) 00:24:58.656 [2024-07-12 19:21:04.621691] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.656 [2024-07-12 19:21:04.621703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e15c0, cid 5, qid 0 00:24:58.656 [2024-07-12 19:21:04.621812] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.656 [2024-07-12 19:21:04.621818] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.656 [2024-07-12 19:21:04.621822] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.621826] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e15c0) on tqpair=0x55dec0 00:24:58.656 [2024-07-12 19:21:04.621840] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.621844] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x55dec0) 00:24:58.656 [2024-07-12 19:21:04.621850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.656 [2024-07-12 19:21:04.621857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.621861] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x55dec0) 00:24:58.656 [2024-07-12 19:21:04.621867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.656 [2024-07-12 19:21:04.621874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.621878] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x55dec0) 00:24:58.656 [2024-07-12 19:21:04.621884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.656 [2024-07-12 19:21:04.621891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.621895] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x55dec0) 00:24:58.656 [2024-07-12 19:21:04.621901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.656 [2024-07-12 19:21:04.621912] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e15c0, cid 5, qid 0 00:24:58.656 [2024-07-12 19:21:04.621917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1440, cid 4, qid 0 00:24:58.656 [2024-07-12 19:21:04.621921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e1740, cid 6, qid 0 00:24:58.656 [2024-07-12 19:21:04.621926] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e18c0, cid 7, qid 0 00:24:58.656 [2024-07-12 19:21:04.622059] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.656 [2024-07-12 19:21:04.622065] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.656 [2024-07-12 19:21:04.622069] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622072] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=8192, cccid=5 00:24:58.656 [2024-07-12 19:21:04.622077] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e15c0) on tqpair(0x55dec0): expected_datao=0, payload_size=8192 00:24:58.656 [2024-07-12 19:21:04.622081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622169] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622175] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622180] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.656 [2024-07-12 19:21:04.622186] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.656 [2024-07-12 19:21:04.622189] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622193] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=512, cccid=4 00:24:58.656 [2024-07-12 19:21:04.622197] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e1440) on tqpair(0x55dec0): expected_datao=0, payload_size=512 00:24:58.656 [2024-07-12 19:21:04.622204] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622210] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622214] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.656 [2024-07-12 19:21:04.622225] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:58.656 [2024-07-12 19:21:04.622228] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622232] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=512, cccid=6 00:24:58.656 [2024-07-12 19:21:04.622236] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e1740) on tqpair(0x55dec0): expected_datao=0, payload_size=512 00:24:58.656 [2024-07-12 19:21:04.622240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622246] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622250] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622255] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:58.656 [2024-07-12 19:21:04.622261] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =7 00:24:58.656 [2024-07-12 19:21:04.622264] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622267] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x55dec0): datao=0, datal=4096, cccid=7 00:24:58.656 [2024-07-12 19:21:04.622272] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x5e18c0) on tqpair(0x55dec0): expected_datao=0, payload_size=4096 00:24:58.656 [2024-07-12 19:21:04.622276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622283] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622286] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622324] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.656 [2024-07-12 19:21:04.622330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.656 [2024-07-12 19:21:04.622333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e15c0) on tqpair=0x55dec0 00:24:58.656 [2024-07-12 19:21:04.622350] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.656 [2024-07-12 19:21:04.622355] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.656 [2024-07-12 19:21:04.622359] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622362] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1440) on tqpair=0x55dec0 00:24:58.656 [2024-07-12 19:21:04.622372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.656 [2024-07-12 19:21:04.622378] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.656 [2024-07-12 19:21:04.622381] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622385] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1740) on tqpair=0x55dec0 00:24:58.656 [2024-07-12 19:21:04.622392] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.656 [2024-07-12 19:21:04.622398] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.656 [2024-07-12 19:21:04.622401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.656 [2024-07-12 19:21:04.622405] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e18c0) on tqpair=0x55dec0 00:24:58.656 ===================================================== 00:24:58.656 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:58.656 ===================================================== 00:24:58.656 Controller Capabilities/Features 00:24:58.656 ================================ 00:24:58.656 Vendor ID: 8086 00:24:58.656 Subsystem Vendor ID: 8086 00:24:58.656 Serial Number: SPDK00000000000001 00:24:58.656 Model Number: SPDK bdev Controller 00:24:58.656 Firmware Version: 24.09 00:24:58.656 Recommended Arb Burst: 6 00:24:58.656 IEEE OUI Identifier: e4 d2 5c 00:24:58.656 Multi-path I/O 00:24:58.656 May have multiple subsystem ports: Yes 00:24:58.656 May have multiple controllers: Yes 00:24:58.656 Associated with SR-IOV VF: No 00:24:58.656 Max Data Transfer Size: 131072 00:24:58.656 Max Number of Namespaces: 32 00:24:58.656 Max Number of I/O Queues: 127 00:24:58.656 NVMe Specification Version (VS): 1.3 
00:24:58.656 NVMe Specification Version (Identify): 1.3 00:24:58.656 Maximum Queue Entries: 128 00:24:58.656 Contiguous Queues Required: Yes 00:24:58.656 Arbitration Mechanisms Supported 00:24:58.656 Weighted Round Robin: Not Supported 00:24:58.656 Vendor Specific: Not Supported 00:24:58.656 Reset Timeout: 15000 ms 00:24:58.656 Doorbell Stride: 4 bytes 00:24:58.656 NVM Subsystem Reset: Not Supported 00:24:58.656 Command Sets Supported 00:24:58.656 NVM Command Set: Supported 00:24:58.656 Boot Partition: Not Supported 00:24:58.656 Memory Page Size Minimum: 4096 bytes 00:24:58.656 Memory Page Size Maximum: 4096 bytes 00:24:58.656 Persistent Memory Region: Not Supported 00:24:58.656 Optional Asynchronous Events Supported 00:24:58.656 Namespace Attribute Notices: Supported 00:24:58.656 Firmware Activation Notices: Not Supported 00:24:58.656 ANA Change Notices: Not Supported 00:24:58.656 PLE Aggregate Log Change Notices: Not Supported 00:24:58.656 LBA Status Info Alert Notices: Not Supported 00:24:58.656 EGE Aggregate Log Change Notices: Not Supported 00:24:58.656 Normal NVM Subsystem Shutdown event: Not Supported 00:24:58.656 Zone Descriptor Change Notices: Not Supported 00:24:58.656 Discovery Log Change Notices: Not Supported 00:24:58.656 Controller Attributes 00:24:58.656 128-bit Host Identifier: Supported 00:24:58.656 Non-Operational Permissive Mode: Not Supported 00:24:58.656 NVM Sets: Not Supported 00:24:58.656 Read Recovery Levels: Not Supported 00:24:58.656 Endurance Groups: Not Supported 00:24:58.656 Predictable Latency Mode: Not Supported 00:24:58.656 Traffic Based Keep ALive: Not Supported 00:24:58.656 Namespace Granularity: Not Supported 00:24:58.656 SQ Associations: Not Supported 00:24:58.656 UUID List: Not Supported 00:24:58.656 Multi-Domain Subsystem: Not Supported 00:24:58.656 Fixed Capacity Management: Not Supported 00:24:58.656 Variable Capacity Management: Not Supported 00:24:58.656 Delete Endurance Group: Not Supported 00:24:58.656 Delete NVM Set: Not Supported 00:24:58.656 Extended LBA Formats Supported: Not Supported 00:24:58.656 Flexible Data Placement Supported: Not Supported 00:24:58.656 00:24:58.656 Controller Memory Buffer Support 00:24:58.656 ================================ 00:24:58.656 Supported: No 00:24:58.656 00:24:58.657 Persistent Memory Region Support 00:24:58.657 ================================ 00:24:58.657 Supported: No 00:24:58.657 00:24:58.657 Admin Command Set Attributes 00:24:58.657 ============================ 00:24:58.657 Security Send/Receive: Not Supported 00:24:58.657 Format NVM: Not Supported 00:24:58.657 Firmware Activate/Download: Not Supported 00:24:58.657 Namespace Management: Not Supported 00:24:58.657 Device Self-Test: Not Supported 00:24:58.657 Directives: Not Supported 00:24:58.657 NVMe-MI: Not Supported 00:24:58.657 Virtualization Management: Not Supported 00:24:58.657 Doorbell Buffer Config: Not Supported 00:24:58.657 Get LBA Status Capability: Not Supported 00:24:58.657 Command & Feature Lockdown Capability: Not Supported 00:24:58.657 Abort Command Limit: 4 00:24:58.657 Async Event Request Limit: 4 00:24:58.657 Number of Firmware Slots: N/A 00:24:58.657 Firmware Slot 1 Read-Only: N/A 00:24:58.657 Firmware Activation Without Reset: N/A 00:24:58.657 Multiple Update Detection Support: N/A 00:24:58.657 Firmware Update Granularity: No Information Provided 00:24:58.657 Per-Namespace SMART Log: No 00:24:58.657 Asymmetric Namespace Access Log Page: Not Supported 00:24:58.657 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:58.657 Command Effects 
Log Page: Supported 00:24:58.657 Get Log Page Extended Data: Supported 00:24:58.657 Telemetry Log Pages: Not Supported 00:24:58.657 Persistent Event Log Pages: Not Supported 00:24:58.657 Supported Log Pages Log Page: May Support 00:24:58.657 Commands Supported & Effects Log Page: Not Supported 00:24:58.657 Feature Identifiers & Effects Log Page:May Support 00:24:58.657 NVMe-MI Commands & Effects Log Page: May Support 00:24:58.657 Data Area 4 for Telemetry Log: Not Supported 00:24:58.657 Error Log Page Entries Supported: 128 00:24:58.657 Keep Alive: Supported 00:24:58.657 Keep Alive Granularity: 10000 ms 00:24:58.657 00:24:58.657 NVM Command Set Attributes 00:24:58.657 ========================== 00:24:58.657 Submission Queue Entry Size 00:24:58.657 Max: 64 00:24:58.657 Min: 64 00:24:58.657 Completion Queue Entry Size 00:24:58.657 Max: 16 00:24:58.657 Min: 16 00:24:58.657 Number of Namespaces: 32 00:24:58.657 Compare Command: Supported 00:24:58.657 Write Uncorrectable Command: Not Supported 00:24:58.657 Dataset Management Command: Supported 00:24:58.657 Write Zeroes Command: Supported 00:24:58.657 Set Features Save Field: Not Supported 00:24:58.657 Reservations: Supported 00:24:58.657 Timestamp: Not Supported 00:24:58.657 Copy: Supported 00:24:58.657 Volatile Write Cache: Present 00:24:58.657 Atomic Write Unit (Normal): 1 00:24:58.657 Atomic Write Unit (PFail): 1 00:24:58.657 Atomic Compare & Write Unit: 1 00:24:58.657 Fused Compare & Write: Supported 00:24:58.657 Scatter-Gather List 00:24:58.657 SGL Command Set: Supported 00:24:58.657 SGL Keyed: Supported 00:24:58.657 SGL Bit Bucket Descriptor: Not Supported 00:24:58.657 SGL Metadata Pointer: Not Supported 00:24:58.657 Oversized SGL: Not Supported 00:24:58.657 SGL Metadata Address: Not Supported 00:24:58.657 SGL Offset: Supported 00:24:58.657 Transport SGL Data Block: Not Supported 00:24:58.657 Replay Protected Memory Block: Not Supported 00:24:58.657 00:24:58.657 Firmware Slot Information 00:24:58.657 ========================= 00:24:58.657 Active slot: 1 00:24:58.657 Slot 1 Firmware Revision: 24.09 00:24:58.657 00:24:58.657 00:24:58.657 Commands Supported and Effects 00:24:58.657 ============================== 00:24:58.657 Admin Commands 00:24:58.657 -------------- 00:24:58.657 Get Log Page (02h): Supported 00:24:58.657 Identify (06h): Supported 00:24:58.657 Abort (08h): Supported 00:24:58.657 Set Features (09h): Supported 00:24:58.657 Get Features (0Ah): Supported 00:24:58.657 Asynchronous Event Request (0Ch): Supported 00:24:58.657 Keep Alive (18h): Supported 00:24:58.657 I/O Commands 00:24:58.657 ------------ 00:24:58.657 Flush (00h): Supported LBA-Change 00:24:58.657 Write (01h): Supported LBA-Change 00:24:58.657 Read (02h): Supported 00:24:58.657 Compare (05h): Supported 00:24:58.657 Write Zeroes (08h): Supported LBA-Change 00:24:58.657 Dataset Management (09h): Supported LBA-Change 00:24:58.657 Copy (19h): Supported LBA-Change 00:24:58.657 00:24:58.657 Error Log 00:24:58.657 ========= 00:24:58.657 00:24:58.657 Arbitration 00:24:58.657 =========== 00:24:58.657 Arbitration Burst: 1 00:24:58.657 00:24:58.657 Power Management 00:24:58.657 ================ 00:24:58.657 Number of Power States: 1 00:24:58.657 Current Power State: Power State #0 00:24:58.657 Power State #0: 00:24:58.657 Max Power: 0.00 W 00:24:58.657 Non-Operational State: Operational 00:24:58.657 Entry Latency: Not Reported 00:24:58.657 Exit Latency: Not Reported 00:24:58.657 Relative Read Throughput: 0 00:24:58.657 Relative Read Latency: 0 00:24:58.657 Relative Write 
Throughput: 0 00:24:58.657 Relative Write Latency: 0 00:24:58.657 Idle Power: Not Reported 00:24:58.657 Active Power: Not Reported 00:24:58.657 Non-Operational Permissive Mode: Not Supported 00:24:58.657 00:24:58.657 Health Information 00:24:58.657 ================== 00:24:58.657 Critical Warnings: 00:24:58.657 Available Spare Space: OK 00:24:58.657 Temperature: OK 00:24:58.657 Device Reliability: OK 00:24:58.657 Read Only: No 00:24:58.657 Volatile Memory Backup: OK 00:24:58.657 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:58.657 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:58.657 Available Spare: 0% 00:24:58.657 Available Spare Threshold: 0% 00:24:58.657 Life Percentage Used:[2024-07-12 19:21:04.622502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622507] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x55dec0) 00:24:58.657 [2024-07-12 19:21:04.622514] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.657 [2024-07-12 19:21:04.622526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e18c0, cid 7, qid 0 00:24:58.657 [2024-07-12 19:21:04.622616] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.657 [2024-07-12 19:21:04.622623] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.657 [2024-07-12 19:21:04.622626] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622630] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e18c0) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622661] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:58.657 [2024-07-12 19:21:04.622670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0e40) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.657 [2024-07-12 19:21:04.622681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e0fc0) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.657 [2024-07-12 19:21:04.622691] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e1140) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.657 [2024-07-12 19:21:04.622700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:58.657 [2024-07-12 19:21:04.622712] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622716] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622719] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.657 [2024-07-12 19:21:04.622726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY 
GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.657 [2024-07-12 19:21:04.622738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.657 [2024-07-12 19:21:04.622827] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.657 [2024-07-12 19:21:04.622833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.657 [2024-07-12 19:21:04.622837] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622841] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622851] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622855] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.657 [2024-07-12 19:21:04.622861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.657 [2024-07-12 19:21:04.622874] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.657 [2024-07-12 19:21:04.622954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.657 [2024-07-12 19:21:04.622960] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.657 [2024-07-12 19:21:04.622964] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622967] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.657 [2024-07-12 19:21:04.622972] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:58.657 [2024-07-12 19:21:04.622976] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:58.657 [2024-07-12 19:21:04.622986] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.657 [2024-07-12 19:21:04.622995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.657 [2024-07-12 19:21:04.623002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623135] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623145] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623149] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.623159] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623163] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623166] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 
19:21:04.623173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623183] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623281] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623291] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623295] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.623304] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623312] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.623322] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623334] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623441] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623444] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623448] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.623457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623461] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.623471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623557] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623561] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.623574] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.623590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623600] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623684] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623690] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623694] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.623707] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623711] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.623721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623836] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623842] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623845] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623849] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.623858] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.623865] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.623872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.623881] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.623986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.623993] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.623996] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.624009] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624013] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624016] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.624023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.624032] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.624100] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 
19:21:04.624106] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.624109] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624113] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.624126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.624140] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.624152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.624238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.624245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.624248] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624252] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.624262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624266] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624269] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.624276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.624285] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.624390] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.624396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.624399] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624403] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.624412] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624416] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624420] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.624426] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.624436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.624540] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.624547] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.624550] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 
19:21:04.624554] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.624563] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624570] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.624577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.624586] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.624653] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.624660] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.658 [2024-07-12 19:21:04.624663] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624667] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.658 [2024-07-12 19:21:04.624676] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624680] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.658 [2024-07-12 19:21:04.624684] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.658 [2024-07-12 19:21:04.624690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.658 [2024-07-12 19:21:04.624702] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.658 [2024-07-12 19:21:04.624795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.658 [2024-07-12 19:21:04.624801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.659 [2024-07-12 19:21:04.624804] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.624808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.659 [2024-07-12 19:21:04.624817] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.624821] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.624825] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.659 [2024-07-12 19:21:04.624831] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.659 [2024-07-12 19:21:04.624840] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.659 [2024-07-12 19:21:04.624945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.659 [2024-07-12 19:21:04.624951] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.659 [2024-07-12 19:21:04.624955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.624959] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.659 [2024-07-12 19:21:04.624968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
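From "Prepare to destruct SSD" onward the trace is the host-side teardown run when the identify tool used by this test exits: the four outstanding ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION, RTD3E and the shutdown timeout are reported, and the repeated FABRIC PROPERTY GET entries (continuing just below, up to the "shutdown complete in 6 milliseconds" line) are the driver polling CSTS over the fabrics Property Get command. A minimal sketch of triggering that path through the public detach API follows; the busy-poll loop and error handling are illustrative, not code from this test.

#include <errno.h>
#include <stdio.h>

#include "spdk/nvme.h"

/*
 * Non-blocking teardown of a controller.  spdk_nvme_detach_async() starts the
 * shutdown traced above (outstanding admin requests such as the AERs are
 * aborted and CC.SHN is set), and spdk_nvme_detach_poll_async() returns
 * -EAGAIN while the driver is still polling CSTS over fabrics Property Get,
 * then 0 once the target has reported shutdown complete.
 */
void detach_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_detach_ctx *detach_ctx = NULL;

	if (spdk_nvme_detach_async(ctrlr, &detach_ctx) != 0 || detach_ctx == NULL) {
		fprintf(stderr, "spdk_nvme_detach_async() failed\n");
		return;
	}

	while (spdk_nvme_detach_poll_async(detach_ctx) == -EAGAIN) {
		/* Busy-poll here; a real application would interleave other work. */
	}
}

spdk_nvme_detach(ctrlr) is the blocking equivalent and is what short-lived tools typically call on exit, which appears to be what produced the shutdown trace above.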
00:24:58.659 [2024-07-12 19:21:04.624972] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.624976] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.659 [2024-07-12 19:21:04.624982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.659 [2024-07-12 19:21:04.624992] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.659 [2024-07-12 19:21:04.625096] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.659 [2024-07-12 19:21:04.625102] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.659 [2024-07-12 19:21:04.625106] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.625109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.659 [2024-07-12 19:21:04.625119] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.629129] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.629134] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x55dec0) 00:24:58.659 [2024-07-12 19:21:04.629141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:58.659 [2024-07-12 19:21:04.629152] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x5e12c0, cid 3, qid 0 00:24:58.659 [2024-07-12 19:21:04.629228] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:58.659 [2024-07-12 19:21:04.629235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:58.659 [2024-07-12 19:21:04.629238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:58.659 [2024-07-12 19:21:04.629242] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x5e12c0) on tqpair=0x55dec0 00:24:58.659 [2024-07-12 19:21:04.629249] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:24:58.659 0% 00:24:58.659 Data Units Read: 0 00:24:58.659 Data Units Written: 0 00:24:58.659 Host Read Commands: 0 00:24:58.659 Host Write Commands: 0 00:24:58.659 Controller Busy Time: 0 minutes 00:24:58.659 Power Cycles: 0 00:24:58.659 Power On Hours: 0 hours 00:24:58.659 Unsafe Shutdowns: 0 00:24:58.659 Unrecoverable Media Errors: 0 00:24:58.659 Lifetime Error Log Entries: 0 00:24:58.659 Warning Temperature Time: 0 minutes 00:24:58.659 Critical Temperature Time: 0 minutes 00:24:58.659 00:24:58.659 Number of Queues 00:24:58.659 ================ 00:24:58.659 Number of I/O Submission Queues: 127 00:24:58.659 Number of I/O Completion Queues: 127 00:24:58.659 00:24:58.659 Active Namespaces 00:24:58.659 ================= 00:24:58.659 Namespace ID:1 00:24:58.659 Error Recovery Timeout: Unlimited 00:24:58.659 Command Set Identifier: NVM (00h) 00:24:58.659 Deallocate: Supported 00:24:58.659 Deallocated/Unwritten Error: Not Supported 00:24:58.659 Deallocated Read Value: Unknown 00:24:58.659 Deallocate in Write Zeroes: Not Supported 00:24:58.659 Deallocated Guard Field: 0xFFFF 00:24:58.659 Flush: Supported 00:24:58.659 Reservation: Supported 00:24:58.659 Namespace Sharing Capabilities: Multiple Controllers 00:24:58.659 Size (in LBAs): 131072 (0GiB) 
00:24:58.659 Capacity (in LBAs): 131072 (0GiB) 00:24:58.659 Utilization (in LBAs): 131072 (0GiB) 00:24:58.659 NGUID: ABCDEF0123456789ABCDEF0123456789 00:24:58.659 EUI64: ABCDEF0123456789 00:24:58.659 UUID: 9020e99b-cdf5-4eb0-b615-b0d3146095b2 00:24:58.659 Thin Provisioning: Not Supported 00:24:58.659 Per-NS Atomic Units: Yes 00:24:58.659 Atomic Boundary Size (Normal): 0 00:24:58.659 Atomic Boundary Size (PFail): 0 00:24:58.659 Atomic Boundary Offset: 0 00:24:58.659 Maximum Single Source Range Length: 65535 00:24:58.659 Maximum Copy Length: 65535 00:24:58.659 Maximum Source Range Count: 1 00:24:58.659 NGUID/EUI64 Never Reused: No 00:24:58.659 Namespace Write Protected: No 00:24:58.659 Number of LBA Formats: 1 00:24:58.659 Current LBA Format: LBA Format #00 00:24:58.659 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:58.659 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:58.659 rmmod nvme_tcp 00:24:58.659 rmmod nvme_fabrics 00:24:58.659 rmmod nvme_keyring 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1526307 ']' 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1526307 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1526307 ']' 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1526307 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1526307 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1526307' 00:24:58.659 killing process with pid 1526307 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 
1526307 00:24:58.659 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1526307 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:58.919 19:21:04 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.464 19:21:06 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:01.464 00:25:01.464 real 0m11.150s 00:25:01.464 user 0m8.249s 00:25:01.464 sys 0m5.804s 00:25:01.464 19:21:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:01.464 19:21:06 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:01.464 ************************************ 00:25:01.464 END TEST nvmf_identify 00:25:01.464 ************************************ 00:25:01.464 19:21:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:01.464 19:21:07 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:01.464 19:21:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:01.464 19:21:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:01.464 19:21:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:01.464 ************************************ 00:25:01.464 START TEST nvmf_perf 00:25:01.464 ************************************ 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:01.464 * Looking for test storage... 
00:25:01.464 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.464 19:21:07 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:25:01.464 19:21:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:08.055 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:08.055 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:08.055 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:08.055 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:08.055 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:08.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:08.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:25:08.316 00:25:08.316 --- 10.0.0.2 ping statistics --- 00:25:08.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.316 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:08.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:08.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms 00:25:08.316 00:25:08.316 --- 10.0.0.1 ping statistics --- 00:25:08.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:08.316 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1531268 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1531268 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1531268 ']' 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:08.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:08.316 19:21:14 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:08.576 [2024-07-12 19:21:14.476940] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:25:08.576 [2024-07-12 19:21:14.476990] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:08.576 EAL: No free 2048 kB hugepages reported on node 1 00:25:08.576 [2024-07-12 19:21:14.542543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:08.576 [2024-07-12 19:21:14.607466] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:08.576 [2024-07-12 19:21:14.607505] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:08.576 [2024-07-12 19:21:14.607513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:08.576 [2024-07-12 19:21:14.607519] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:08.576 [2024-07-12 19:21:14.607525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:08.576 [2024-07-12 19:21:14.607664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:08.576 [2024-07-12 19:21:14.607776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:08.576 [2024-07-12 19:21:14.607932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.576 [2024-07-12 19:21:14.607933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:09.146 19:21:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:09.146 19:21:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:25:09.146 19:21:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:09.146 19:21:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:09.146 19:21:15 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:09.407 19:21:15 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.407 19:21:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:09.407 19:21:15 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:09.667 19:21:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:09.667 19:21:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:09.927 19:21:15 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0 00:25:09.927 19:21:15 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:10.187 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:10.187 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']' 00:25:10.187 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:10.187 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:10.187 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:10.187 [2024-07-12 19:21:16.261368] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.187 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:10.447 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:10.447 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:10.706 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:10.707 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:10.707 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:10.967 [2024-07-12 19:21:16.943967] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.967 19:21:16 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:11.226 19:21:17 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']' 00:25:11.226 19:21:17 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:11.226 19:21:17 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:11.227 19:21:17 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0' 00:25:12.609 Initializing NVMe Controllers 00:25:12.609 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a] 00:25:12.609 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0 00:25:12.609 Initialization complete. Launching workers. 00:25:12.609 ======================================================== 00:25:12.609 Latency(us) 00:25:12.609 Device Information : IOPS MiB/s Average min max 00:25:12.609 PCIE (0000:65:00.0) NSID 1 from core 0: 79541.90 310.71 401.91 66.17 8300.91 00:25:12.609 ======================================================== 00:25:12.609 Total : 79541.90 310.71 401.91 66.17 8300.91 00:25:12.609 00:25:12.609 19:21:18 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:12.609 EAL: No free 2048 kB hugepages reported on node 1 00:25:13.994 Initializing NVMe Controllers 00:25:13.994 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:13.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:13.994 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:13.994 Initialization complete. Launching workers. 
00:25:13.994 ======================================================== 00:25:13.994 Latency(us) 00:25:13.994 Device Information : IOPS MiB/s Average min max 00:25:13.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 108.00 0.42 9449.61 349.49 45385.26 00:25:13.994 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17939.42 7955.68 47905.38 00:25:13.994 ======================================================== 00:25:13.994 Total : 164.00 0.64 12348.57 349.49 47905.38 00:25:13.994 00:25:13.994 19:21:19 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:13.994 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.052 Initializing NVMe Controllers 00:25:15.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:15.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:15.052 Initialization complete. Launching workers. 00:25:15.052 ======================================================== 00:25:15.052 Latency(us) 00:25:15.052 Device Information : IOPS MiB/s Average min max 00:25:15.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10123.48 39.54 3174.31 540.45 6688.41 00:25:15.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3731.81 14.58 8620.33 6256.89 16261.66 00:25:15.052 ======================================================== 00:25:15.052 Total : 13855.29 54.12 4641.15 540.45 16261.66 00:25:15.052 00:25:15.052 19:21:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:15.052 19:21:21 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:15.052 19:21:21 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:15.052 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.597 Initializing NVMe Controllers 00:25:17.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.597 Controller IO queue size 128, less than required. 00:25:17.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.597 Controller IO queue size 128, less than required. 00:25:17.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:17.597 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:17.597 Initialization complete. Launching workers. 
00:25:17.597 ======================================================== 00:25:17.597 Latency(us) 00:25:17.597 Device Information : IOPS MiB/s Average min max 00:25:17.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 934.16 233.54 140393.53 83315.33 236416.55 00:25:17.597 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.49 142.37 234758.78 74860.73 376379.98 00:25:17.597 ======================================================== 00:25:17.597 Total : 1503.65 375.91 176133.19 74860.73 376379.98 00:25:17.597 00:25:17.597 19:21:23 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:17.597 EAL: No free 2048 kB hugepages reported on node 1 00:25:17.597 No valid NVMe controllers or AIO or URING devices found 00:25:17.597 Initializing NVMe Controllers 00:25:17.597 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:17.597 Controller IO queue size 128, less than required. 00:25:17.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.597 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:17.597 Controller IO queue size 128, less than required. 00:25:17.597 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:17.597 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:17.597 WARNING: Some requested NVMe devices were skipped 00:25:17.597 19:21:23 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:17.858 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.431 Initializing NVMe Controllers 00:25:20.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:20.431 Controller IO queue size 128, less than required. 00:25:20.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.431 Controller IO queue size 128, less than required. 00:25:20.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:20.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:20.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:20.431 Initialization complete. Launching workers. 
00:25:20.431 00:25:20.431 ==================== 00:25:20.431 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:20.431 TCP transport: 00:25:20.431 polls: 25629 00:25:20.431 idle_polls: 9772 00:25:20.431 sock_completions: 15857 00:25:20.431 nvme_completions: 8565 00:25:20.431 submitted_requests: 12830 00:25:20.431 queued_requests: 1 00:25:20.431 00:25:20.431 ==================== 00:25:20.431 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:20.431 TCP transport: 00:25:20.431 polls: 28692 00:25:20.431 idle_polls: 13414 00:25:20.431 sock_completions: 15278 00:25:20.431 nvme_completions: 3653 00:25:20.431 submitted_requests: 5474 00:25:20.431 queued_requests: 1 00:25:20.431 ======================================================== 00:25:20.431 Latency(us) 00:25:20.431 Device Information : IOPS MiB/s Average min max 00:25:20.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2137.30 534.33 60457.43 33767.75 97133.60 00:25:20.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 911.42 227.86 143966.58 55098.45 220317.66 00:25:20.431 ======================================================== 00:25:20.431 Total : 3048.72 762.18 85422.67 33767.75 220317.66 00:25:20.431 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:20.431 rmmod nvme_tcp 00:25:20.431 rmmod nvme_fabrics 00:25:20.431 rmmod nvme_keyring 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1531268 ']' 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1531268 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1531268 ']' 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1531268 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1531268 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf 
-- common/autotest_common.sh@966 -- # echo 'killing process with pid 1531268' 00:25:20.431 killing process with pid 1531268 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1531268 00:25:20.431 19:21:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1531268 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:22.974 19:21:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.889 19:21:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:24.889 00:25:24.889 real 0m23.494s 00:25:24.889 user 0m56.128s 00:25:24.889 sys 0m7.948s 00:25:24.889 19:21:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:24.889 19:21:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:24.889 ************************************ 00:25:24.889 END TEST nvmf_perf 00:25:24.889 ************************************ 00:25:24.889 19:21:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:24.889 19:21:30 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:24.889 19:21:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:24.889 19:21:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:24.889 19:21:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:24.889 ************************************ 00:25:24.889 START TEST nvmf_fio_host 00:25:24.889 ************************************ 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:24.889 * Looking for test storage... 
00:25:24.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.889 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:24.890 19:21:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:33.035 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:33.035 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:33.035 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:33.035 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:33.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.479 ms 00:25:33.035 00:25:33.035 --- 10.0.0.2 ping statistics --- 00:25:33.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.035 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:25:33.035 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.318 ms 00:25:33.036 00:25:33.036 --- 10.0.0.1 ping statistics --- 00:25:33.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.036 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:33.036 19:21:37 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1538017 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1538017 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1538017 ']' 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.036 [2024-07-12 19:21:38.094250] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:25:33.036 [2024-07-12 19:21:38.094317] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.036 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.036 [2024-07-12 19:21:38.164964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:33.036 [2024-07-12 19:21:38.241080] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
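Annotation: the bring-up traced above moves one E810 port into a private network namespace as the target side at 10.0.0.2/24, keeps the other port in the root namespace as the initiator at 10.0.0.1/24, opens TCP port 4420, verifies connectivity in both directions, and then launches nvmf_tgt inside that namespace. A condensed sketch of the same steps, using placeholder names (ns0, eth0, eth1) rather than the cvl_* names the harness derives:

  ns=ns0
  ip netns add "$ns"
  ip link set eth0 netns "$ns"                         # target-facing port
  ip addr add 10.0.0.1/24 dev eth1                     # initiator side, root namespace
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev eth0
  ip link set eth1 up
  ip netns exec "$ns" ip link set eth0 up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
  modprobe nvme-tcp
  ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # run from the SPDK repo root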
00:25:33.036 [2024-07-12 19:21:38.241118] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:33.036 [2024-07-12 19:21:38.241131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.036 [2024-07-12 19:21:38.241138] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.036 [2024-07-12 19:21:38.241143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.036 [2024-07-12 19:21:38.241319] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:33.036 [2024-07-12 19:21:38.241494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:33.036 [2024-07-12 19:21:38.241630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:33.036 [2024-07-12 19:21:38.241631] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:33.036 19:21:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:33.036 [2024-07-12 19:21:39.009081] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.036 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:33.036 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:33.036 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.036 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:33.296 Malloc1 00:25:33.296 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:33.556 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:33.556 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.816 [2024-07-12 19:21:39.730590] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.816 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:33.816 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:33.817 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:34.104 19:21:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:34.368 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:34.368 fio-3.35 00:25:34.368 Starting 1 thread 00:25:34.368 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.912 00:25:36.912 test: (groupid=0, jobs=1): err= 0: pid=1538836: Fri Jul 12 19:21:42 2024 00:25:36.912 read: IOPS=13.6k, BW=53.3MiB/s (55.9MB/s)(107MiB/2004msec) 00:25:36.912 slat (usec): min=2, max=223, avg= 2.16, stdev= 1.93 00:25:36.912 clat (usec): min=3363, max=8730, avg=5198.40, stdev=670.31 00:25:36.912 lat (usec): min=3400, max=8732, avg=5200.56, stdev=670.36 00:25:36.912 clat percentiles (usec): 00:25:36.912 | 1.00th=[ 4113], 5.00th=[ 4424], 10.00th=[ 4555], 20.00th=[ 4752], 00:25:36.912 | 30.00th=[ 4883], 40.00th=[ 4948], 50.00th=[ 5080], 60.00th=[ 5211], 00:25:36.912 | 70.00th=[ 5342], 80.00th=[ 5473], 90.00th=[ 5800], 95.00th=[ 6783], 00:25:36.912 | 99.00th=[ 7701], 99.50th=[ 7963], 99.90th=[ 8356], 99.95th=[ 8455], 00:25:36.912 | 99.99th=[ 8586] 00:25:36.912 bw ( KiB/s): min=48808, 
max=56480, per=99.94%, avg=54520.00, stdev=3808.24, samples=4 00:25:36.912 iops : min=12202, max=14120, avg=13630.00, stdev=952.06, samples=4 00:25:36.912 write: IOPS=13.6k, BW=53.2MiB/s (55.8MB/s)(107MiB/2004msec); 0 zone resets 00:25:36.912 slat (usec): min=2, max=212, avg= 2.26, stdev= 1.41 00:25:36.912 clat (usec): min=2323, max=7678, avg=4123.81, stdev=551.01 00:25:36.912 lat (usec): min=2337, max=7680, avg=4126.07, stdev=551.09 00:25:36.912 clat percentiles (usec): 00:25:36.912 | 1.00th=[ 3130], 5.00th=[ 3458], 10.00th=[ 3621], 20.00th=[ 3785], 00:25:36.912 | 30.00th=[ 3884], 40.00th=[ 3982], 50.00th=[ 4047], 60.00th=[ 4146], 00:25:36.912 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4555], 95.00th=[ 5473], 00:25:36.912 | 99.00th=[ 6259], 99.50th=[ 6456], 99.90th=[ 6783], 99.95th=[ 6915], 00:25:36.912 | 99.99th=[ 7635] 00:25:36.912 bw ( KiB/s): min=49424, max=56320, per=99.99%, avg=54496.00, stdev=3382.65, samples=4 00:25:36.912 iops : min=12356, max=14080, avg=13624.00, stdev=845.66, samples=4 00:25:36.912 lat (msec) : 4=21.54%, 10=78.46% 00:25:36.912 cpu : usr=69.45%, sys=24.86%, ctx=43, majf=0, minf=7 00:25:36.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:36.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:36.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:36.912 issued rwts: total=27331,27304,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:36.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:36.912 00:25:36.912 Run status group 0 (all jobs): 00:25:36.912 READ: bw=53.3MiB/s (55.9MB/s), 53.3MiB/s-53.3MiB/s (55.9MB/s-55.9MB/s), io=107MiB (112MB), run=2004-2004msec 00:25:36.912 WRITE: bw=53.2MiB/s (55.8MB/s), 53.2MiB/s-53.2MiB/s (55.8MB/s-55.8MB/s), io=107MiB (112MB), run=2004-2004msec 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print 
$3}' 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:36.912 19:21:42 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:36.912 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:36.912 fio-3.35 00:25:36.912 Starting 1 thread 00:25:36.912 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.470 00:25:39.470 test: (groupid=0, jobs=1): err= 0: pid=1539359: Fri Jul 12 19:21:45 2024 00:25:39.470 read: IOPS=8985, BW=140MiB/s (147MB/s)(282MiB/2009msec) 00:25:39.470 slat (usec): min=3, max=114, avg= 3.64, stdev= 1.66 00:25:39.470 clat (usec): min=2479, max=20409, avg=8643.34, stdev=2059.48 00:25:39.470 lat (usec): min=2482, max=20415, avg=8646.98, stdev=2059.65 00:25:39.470 clat percentiles (usec): 00:25:39.470 | 1.00th=[ 4424], 5.00th=[ 5407], 10.00th=[ 6063], 20.00th=[ 6915], 00:25:39.470 | 30.00th=[ 7504], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9110], 00:25:39.470 | 70.00th=[ 9634], 80.00th=[10421], 90.00th=[11076], 95.00th=[11863], 00:25:39.470 | 99.00th=[14091], 99.50th=[15139], 99.90th=[20317], 99.95th=[20317], 00:25:39.470 | 99.99th=[20317] 00:25:39.470 bw ( KiB/s): min=62400, max=80640, per=49.42%, avg=71040.00, stdev=7576.01, samples=4 00:25:39.470 iops : min= 3900, max= 5040, avg=4440.00, stdev=473.50, samples=4 00:25:39.470 write: IOPS=5067, BW=79.2MiB/s (83.0MB/s)(145MiB/1835msec); 0 zone resets 00:25:39.470 slat (usec): min=40, max=358, avg=41.14, stdev= 7.42 00:25:39.470 clat (usec): min=3317, max=18542, avg=9800.40, stdev=1652.24 00:25:39.470 lat (usec): min=3357, max=18582, avg=9841.54, stdev=1653.45 00:25:39.470 clat percentiles (usec): 00:25:39.470 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 7832], 20.00th=[ 8455], 00:25:39.470 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:25:39.470 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11863], 95.00th=[12649], 00:25:39.470 | 99.00th=[14484], 99.50th=[15270], 99.90th=[17957], 99.95th=[18220], 00:25:39.470 | 99.99th=[18482] 00:25:39.470 bw ( KiB/s): min=65312, max=83808, per=91.05%, avg=73824.00, stdev=7791.85, samples=4 00:25:39.470 iops : min= 4082, max= 5238, avg=4614.00, stdev=486.99, samples=4 00:25:39.470 lat (msec) : 4=0.25%, 10=69.40%, 20=30.28%, 50=0.07% 00:25:39.470 cpu : usr=83.91%, sys=13.05%, ctx=18, majf=0, minf=10 00:25:39.470 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:39.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:39.470 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:39.470 issued rwts: total=18051,9299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:39.470 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:39.470 00:25:39.470 Run status group 0 (all jobs): 00:25:39.470 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=282MiB (296MB), run=2009-2009msec 00:25:39.470 WRITE: bw=79.2MiB/s (83.0MB/s), 79.2MiB/s-79.2MiB/s (83.0MB/s-83.0MB/s), io=145MiB (152MB), run=1835-1835msec 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:39.470 rmmod nvme_tcp 00:25:39.470 rmmod nvme_fabrics 00:25:39.470 rmmod nvme_keyring 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1538017 ']' 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1538017 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1538017 ']' 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1538017 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:39.470 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1538017 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1538017' 00:25:39.731 killing process with pid 1538017 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1538017 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1538017 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:39.731 19:21:45 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.731 19:21:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.279 19:21:47 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:42.279 00:25:42.279 real 0m17.207s 00:25:42.279 user 1m5.580s 00:25:42.279 sys 0m7.277s 00:25:42.279 19:21:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:42.279 19:21:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 ************************************ 00:25:42.279 END TEST nvmf_fio_host 00:25:42.279 ************************************ 00:25:42.279 19:21:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:42.279 19:21:47 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:42.279 19:21:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:42.279 19:21:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.279 19:21:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:42.279 ************************************ 00:25:42.279 START TEST nvmf_failover 00:25:42.279 ************************************ 00:25:42.279 19:21:47 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:42.279 * Looking for test storage... 
00:25:42.279 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.279 19:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.279 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:42.279 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.279 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.279 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:42.280 19:21:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:48.869 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:25:48.870 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:25:48.870 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:25:48.870 Found net devices under 0000:4b:00.0: cvl_0_0 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:25:48.870 Found net devices under 0000:4b:00.1: cvl_0_1 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:48.870 19:21:54 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:49.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:25:49.130 00:25:49.130 --- 10.0.0.2 ping statistics --- 00:25:49.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.130 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:25:49.130 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:49.131 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.131 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.235 ms 00:25:49.131 00:25:49.131 --- 10.0.0.1 ping statistics --- 00:25:49.131 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.131 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:49.131 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:49.391 19:21:55 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:49.391 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1544003 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1544003 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1544003 ']' 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:49.392 19:21:55 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:49.392 [2024-07-12 19:21:55.356033] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
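Annotation: on the nvmf_tgt command line above, -m is a hexadecimal CPU core mask and -e the tracepoint group mask. The earlier fio-host target ran with -m 0xF (cores 0-3, four reactors), while this failover target uses -m 0xE (cores 1-3, three reactors), which is consistent with the 'Total cores available' and 'Reactor started on core N' notices in the two runs. One way to decode such a mask, shown only as a convenience sketch:

  mask=0xE
  for cpu in $(seq 0 31); do
      (( (mask >> cpu) & 1 )) && echo "reactor expected on core $cpu"
  done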
00:25:49.392 [2024-07-12 19:21:55.356101] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.392 EAL: No free 2048 kB hugepages reported on node 1 00:25:49.392 [2024-07-12 19:21:55.445157] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:49.652 [2024-07-12 19:21:55.538111] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.652 [2024-07-12 19:21:55.538174] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.652 [2024-07-12 19:21:55.538182] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.652 [2024-07-12 19:21:55.538189] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.652 [2024-07-12 19:21:55.538195] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:49.652 [2024-07-12 19:21:55.538356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.652 [2024-07-12 19:21:55.538634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:49.652 [2024-07-12 19:21:55.538635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.223 [2024-07-12 19:21:56.303978] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.223 19:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:50.483 Malloc0 00:25:50.483 19:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.743 19:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:50.743 19:21:56 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:51.003 [2024-07-12 19:21:56.989650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.003 19:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:51.262 [2024-07-12 
19:21:57.158058] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:51.262 19:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:51.262 [2024-07-12 19:21:57.318557] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:25:51.262 19:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1544371 00:25:51.262 19:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:51.262 19:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:51.262 19:21:57 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1544371 /var/tmp/bdevperf.sock 00:25:51.263 19:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1544371 ']' 00:25:51.263 19:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:51.263 19:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:51.263 19:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:51.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:51.263 19:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:51.263 19:21:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:52.203 19:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:52.203 19:21:58 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:52.203 19:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:52.463 NVMe0n1 00:25:52.463 19:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:52.724 00:25:52.724 19:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1544705 00:25:52.724 19:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:52.724 19:21:58 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:53.665 19:21:59 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:53.926 [2024-07-12 19:21:59.902452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902485] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xbb5d50 is same with the state(5) to be set
[identical 'The recv state of tqpair=0xbb5d50 is same with the state(5) to be set' notices repeated many times during listener removal; duplicates omitted]
00:25:53.926 [2024-07-12 19:21:59.902683] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902687] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902727] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902736] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 [2024-07-12 19:21:59.902741] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb5d50 is same with the state(5) to be set 00:25:53.926 19:21:59 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:25:57.221 19:22:02 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:57.221 00:25:57.221 19:22:03 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:57.480 [2024-07-12 19:22:03.393607] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393641] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393646] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393651] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 
19:22:03.393655] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393660] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393664] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393677] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393686] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393698] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393703] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 [2024-07-12 19:22:03.393712] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7480 is same with the state(5) to be set 00:25:57.480 19:22:03 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:00.781 19:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.781 [2024-07-12 19:22:06.571968] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.781 19:22:06 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:01.724 19:22:07 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:01.724 [2024-07-12 19:22:07.750629] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750659] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750665] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750670] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750675] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750679] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750684] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750688] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750701] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750706] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750710] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750715] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750719] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750723] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750728] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750732] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750737] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750751] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.724 [2024-07-12 19:22:07.750756] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750761] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750770] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750775] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750779] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750783] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 
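The listener moves recorded above (remove the 4420 listener, give bdevperf an extra path on 4422, remove 4421, re-add 4420, then remove 4422) are what exercise the NVMe/TCP failover path while I/O stays in flight. As a rough, hand-runnable sketch only, the same rotation could be replayed with the rpc.py calls already shown in this log; the SPDK checkout path, the 10.0.0.2 address, ports 4420-4422, the bdevperf RPC socket and the cnode1 NQN are taken from this job and are assumptions outside of it.

  #!/usr/bin/env bash
  # Sketch of the listener rotation driven by host/failover.sh lines 43-57 above.
  # Assumes a running nvmf target that already exposes nqn.2016-06.io.spdk:cnode1
  # on 10.0.0.2 and a bdevperf instance on /var/tmp/bdevperf.sock, as in this log.
  set -e
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  RPC=$SPDK_DIR/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop the first path
  sleep 3
  # give the initiator a spare path on 4422 before taking 4421 away
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # bring 4420 back
  sleep 1
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422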
00:26:01.725 [2024-07-12 19:22:07.750788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750792] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750801] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750806] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750811] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 [2024-07-12 19:22:07.750819] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbb7cd0 is same with the state(5) to be set 00:26:01.725 19:22:07 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1544705 00:26:08.320 0 00:26:08.320 19:22:13 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1544371 00:26:08.320 19:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1544371 ']' 00:26:08.320 19:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1544371 00:26:08.320 19:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:08.320 19:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:08.320 19:22:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544371 00:26:08.320 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:08.320 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:08.320 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544371' 00:26:08.320 killing process with pid 1544371 00:26:08.320 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1544371 00:26:08.320 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1544371 00:26:08.320 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:08.320 [2024-07-12 19:21:57.397023] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:26:08.320 [2024-07-12 19:21:57.397081] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1544371 ] 00:26:08.320 EAL: No free 2048 kB hugepages reported on node 1 00:26:08.320 [2024-07-12 19:21:57.456584] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:08.320 [2024-07-12 19:21:57.520694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:08.320 Running I/O for 15 seconds... 
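The try.txt dump that follows comes from the bdevperf initiator whose start-up is shown just above (bdevperf launched with -z, the controller attached through ports 4420 and 4421, then perform_tests). For orientation, a minimal sketch of that initiator-side wiring is given below; it reuses only the bdevperf options and rpc.py calls that appear verbatim in this log, and the workspace paths, socket, address and NQN are assumptions outside of this job.

  #!/usr/bin/env bash
  # Sketch of the initiator side (host/failover.sh lines 30-38 above).
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  SOCK=/var/tmp/bdevperf.sock
  NQN=nqn.2016-06.io.spdk:cnode1

  # Start bdevperf idle: -z waits for RPC, queue depth 128, 4 KiB I/O,
  # verify workload, 15 second run time (same options as logged above).
  $SPDK_DIR/build/examples/bdevperf -z -r $SOCK -q 128 -o 4096 -w verify -t 15 -f &
  bdevperf_pid=$!
  sleep 2   # crude stand-in for the waitforlisten step used by the CI script

  # Attach the subsystem through two listeners so bdev NVMe0n1 has a second
  # path to fail over to when a listener disappears.
  $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN
  $SPDK_DIR/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n $NQN

  # Run the timed workload; the ABORTED - SQ DELETION completions in the dump
  # that follows are commands aborted when their queues are deleted mid-run.
  $SPDK_DIR/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

  # the CI script then kills bdevperf (killprocess above); do the same here
  kill $bdevperf_pid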
00:26:08.320 [2024-07-12 19:21:59.905711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.320 [2024-07-12 19:21:59.905746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.320 [2024-07-12 19:21:59.905763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.320 [2024-07-12 19:21:59.905771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.320 [2024-07-12 19:21:59.905781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:99960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.320 [2024-07-12 19:21:59.905789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.320 [2024-07-12 19:21:59.905798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.320 [2024-07-12 19:21:59.905805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.320 [2024-07-12 19:21:59.905814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.321 [2024-07-12 19:21:59.905821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.321 [2024-07-12 19:21:59.905838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.321 [2024-07-12 19:21:59.905855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:100120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:100128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:100136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905914] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:100152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:100160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.905990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:100176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.905998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:100184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:100200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:100208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906099] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:53 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:100344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:100352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:100360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:100368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:100376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:100384 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:100392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:100416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:100424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.321 [2024-07-12 19:21:59.906532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:100432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.321 [2024-07-12 19:21:59.906541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:100440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:100448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:100456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:08.322 [2024-07-12 19:21:59.906609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:100472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:100480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:100488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.322 [2024-07-12 19:21:59.906691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906712] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100512 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906737] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906743] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100520 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906769] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100528 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:08.322 [2024-07-12 19:21:59.906789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100536 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100544 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906843] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906849] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100552 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906875] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100560 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906897] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100568 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906923] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906928] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100576 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906951] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100584 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.906977] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.906983] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.906989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100592 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.906996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907005] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907010] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100600 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907031] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907037] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100000 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907057] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100008 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907084] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907089] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100016 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100024 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907141] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907147] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100032 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907167] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907172] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100040 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907193] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100048 len:8 PRP1 0x0 PRP2 0x0 00:26:08.322 [2024-07-12 19:21:59.907213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.322 [2024-07-12 19:21:59.907220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.322 [2024-07-12 19:21:59.907225] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.322 [2024-07-12 19:21:59.907231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100608 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907246] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907252] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100616 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907272] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 
19:21:59.907277] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100624 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100632 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907325] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100640 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907352] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100648 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907378] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907383] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100656 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907404] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100664 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907433] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907438] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100672 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907458] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100680 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907485] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100688 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100696 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907537] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907542] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100704 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907562] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100712 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907589] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907594] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100720 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907614] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907624] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907645] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907650] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100736 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907671] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100744 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907697] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907703] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100752 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907730] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100760 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907750] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 
[2024-07-12 19:21:59.907762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100768 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100776 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907809] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100784 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907831] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907837] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100792 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907862] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100800 len:8 PRP1 0x0 PRP2 0x0 00:26:08.323 [2024-07-12 19:21:59.907876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.323 [2024-07-12 19:21:59.907884] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.323 [2024-07-12 19:21:59.907889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.323 [2024-07-12 19:21:59.907895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100808 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.907902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.907909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.907915] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.907921] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100816 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.907928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.907936] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.907941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.907947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100824 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.907954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.907961] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.907966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.907973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100832 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.907981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.907988] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.907993] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.907999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100840 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.908005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.908014] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.908019] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.908025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100848 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.908034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.908042] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.908047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.908053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100856 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.908061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.908068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.908074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.908080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:100864 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918065] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918073] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100872 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918110] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100880 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100888 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918166] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918171] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100896 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918192] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918198] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100904 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918218] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100912 len:8 PRP1 0x0 PRP2 
0x0 00:26:08.324 [2024-07-12 19:21:59.918243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918250] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918256] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100920 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918276] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100928 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918307] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100936 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918334] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100944 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918355] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918360] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100952 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918381] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918387] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100960 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918400] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918412] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100056 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.324 [2024-07-12 19:21:59.918436] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.324 [2024-07-12 19:21:59.918441] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.324 [2024-07-12 19:21:59.918447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100064 len:8 PRP1 0x0 PRP2 0x0 00:26:08.324 [2024-07-12 19:21:59.918454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918461] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.325 [2024-07-12 19:21:59.918466] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.325 [2024-07-12 19:21:59.918473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100072 len:8 PRP1 0x0 PRP2 0x0 00:26:08.325 [2024-07-12 19:21:59.918480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.325 [2024-07-12 19:21:59.918493] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.325 [2024-07-12 19:21:59.918499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100080 len:8 PRP1 0x0 PRP2 0x0 00:26:08.325 [2024-07-12 19:21:59.918505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.325 [2024-07-12 19:21:59.918518] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.325 [2024-07-12 19:21:59.918525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100088 len:8 PRP1 0x0 PRP2 0x0 00:26:08.325 [2024-07-12 19:21:59.918531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918539] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.325 [2024-07-12 19:21:59.918545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.325 [2024-07-12 19:21:59.918551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100096 len:8 PRP1 0x0 PRP2 0x0 00:26:08.325 [2024-07-12 19:21:59.918559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.325 [2024-07-12 19:21:59.918572] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.325 [2024-07-12 19:21:59.918578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100104 len:8 PRP1 0x0 PRP2 0x0 00:26:08.325 [2024-07-12 19:21:59.918585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918593] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.325 [2024-07-12 19:21:59.918598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.325 [2024-07-12 19:21:59.918604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100112 len:8 PRP1 0x0 PRP2 0x0 00:26:08.325 [2024-07-12 19:21:59.918612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918650] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x89c420 was disconnected and freed. reset controller. 00:26:08.325 [2024-07-12 19:21:59.918659] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:08.325 [2024-07-12 19:21:59.918687] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.325 [2024-07-12 19:21:59.918697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.325 [2024-07-12 19:21:59.918714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.325 [2024-07-12 19:21:59.918730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.325 [2024-07-12 19:21:59.918744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:21:59.918752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.325 [2024-07-12 19:21:59.918798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87aef0 (9): Bad file descriptor 00:26:08.325 [2024-07-12 19:21:59.922315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.325 [2024-07-12 19:21:59.955713] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:08.325 [2024-07-12 19:22:03.394145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 
19:22:03.394354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.325 [2024-07-12 19:22:03.394527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.325 [2024-07-12 19:22:03.394536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:19040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:51 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:19064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:19088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19096 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.326 [2024-07-12 19:22:03.394877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.394985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.394992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 
[2024-07-12 19:22:03.395024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395206] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:19432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:19440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.326 [2024-07-12 19:22:03.395257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.326 [2024-07-12 19:22:03.395266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.327 [2024-07-12 19:22:03.395309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.327 [2024-07-12 19:22:03.395325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.327 [2024-07-12 19:22:03.395342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:19136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.327 [2024-07-12 19:22:03.395359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.327 [2024-07-12 19:22:03.395377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:19152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.327 [2024-07-12 19:22:03.395394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:19560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 
19:22:03.395717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:19632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:19640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395880] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:19720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.327 [2024-07-12 19:22:03.395921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.327 [2024-07-12 19:22:03.395951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19728 len:8 PRP1 0x0 PRP2 0x0 00:26:08.327 [2024-07-12 19:22:03.395959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.327 [2024-07-12 19:22:03.395974] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.327 [2024-07-12 19:22:03.395980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19736 len:8 PRP1 0x0 PRP2 0x0 00:26:08.327 [2024-07-12 19:22:03.395989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.327 [2024-07-12 19:22:03.395996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396002] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396024] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396030] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19752 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396051] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19760 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396077] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19768 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396140] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19784 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396161] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396166] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19792 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396194] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19800 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:8 PRP1 
0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396249] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19816 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396269] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19824 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19832 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396328] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396356] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19848 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396376] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396382] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19856 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396395] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396403] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19160 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396428] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396436] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19168 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396457] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19176 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396483] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19184 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396513] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19192 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19200 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396561] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.328 [2024-07-12 19:22:03.396566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.328 [2024-07-12 19:22:03.396572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19208 len:8 PRP1 0x0 PRP2 0x0 00:26:08.328 [2024-07-12 19:22:03.396580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.396615] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x89e5e0 was disconnected and freed. reset controller. 00:26:08.328 [2024-07-12 19:22:03.396624] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:08.328 [2024-07-12 19:22:03.396644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.328 [2024-07-12 19:22:03.396652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.406612] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.328 [2024-07-12 19:22:03.406641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.406651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.328 [2024-07-12 19:22:03.406663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.406672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.328 [2024-07-12 19:22:03.406679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.328 [2024-07-12 19:22:03.406687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.328 [2024-07-12 19:22:03.406717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87aef0 (9): Bad file descriptor 00:26:08.329 [2024-07-12 19:22:03.410281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.329 [2024-07-12 19:22:03.442702] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:26:08.329 [2024-07-12 19:22:07.751243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.329 [2024-07-12 19:22:07.751277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.329 [2024-07-12 19:22:07.751294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:21264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.329 [2024-07-12 19:22:07.751302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.329 [2024-07-12 19:22:07.751313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.329 [2024-07-12 19:22:07.751320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.329 [2024-07-12 19:22:07.751330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.329 [2024-07-12 19:22:07.751337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.329 [2024-07-12 19:22:07.751346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:21288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.329 [2024-07-12 19:22:07.751354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.329 [2024-07-12 19:22:07.751363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.330 [2024-07-12 19:22:07.751371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.330 [2024-07-12 19:22:07.751387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:21624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:21632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751449] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:21664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:21688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:21704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751611] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:21752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:21768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21800 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:21832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.330 [2024-07-12 19:22:07.751861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:21840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.330 [2024-07-12 19:22:07.751868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:21848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:21864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:21872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 
[2024-07-12 19:22:07.751949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.751990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.751998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:21912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:21920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:21944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752112] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:21976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:21328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:08.331 [2024-07-12 19:22:07.752281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:22008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:22016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:22024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:22072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:22080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.331 [2024-07-12 19:22:07.752571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.331 [2024-07-12 19:22:07.752577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:22128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 
19:22:07.752618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:22216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:22224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:08.332 [2024-07-12 19:22:07.752789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752811] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22232 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752836] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.752842] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752862] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.752867] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22248 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752890] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.752895] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22256 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752917] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.752923] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22264 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.752949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752962] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752969] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.752975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.752982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21376 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.752989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.752997] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.753003] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.753010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21384 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.753018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.753026] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.753033] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.753039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21392 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.753046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.753054] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.753059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.753066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21400 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.753073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.753082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.753088] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.753095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21408 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.753102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.753112] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:08.332 [2024-07-12 19:22:07.753118] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:08.332 [2024-07-12 19:22:07.753131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21416 len:8 PRP1 0x0 PRP2 0x0 00:26:08.332 [2024-07-12 19:22:07.753139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.332 [2024-07-12 19:22:07.753148 - 19:22:07.763873] nvme_qpair.c: the same sequence repeats for every queued READ on qid:1 (sqid:1 cid:0 nsid:1, lba:21424 through lba:21608 in steps of 8, len:8 PRP1 0x0 PRP2 0x0): 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o; 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually; 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21424..21608 len:8 PRP1 0x0 PRP2 0x0; 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.333 [2024-07-12 19:22:07.763913] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x89ee50 was disconnected and freed. 
reset controller. 00:26:08.333 [2024-07-12 19:22:07.763923] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:08.333 [2024-07-12 19:22:07.763950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.333 [2024-07-12 19:22:07.763960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.333 [2024-07-12 19:22:07.763969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.333 [2024-07-12 19:22:07.763976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.333 [2024-07-12 19:22:07.763985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.333 [2024-07-12 19:22:07.763992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.333 [2024-07-12 19:22:07.764000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:08.333 [2024-07-12 19:22:07.764007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:08.333 [2024-07-12 19:22:07.764014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:08.333 [2024-07-12 19:22:07.764054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x87aef0 (9): Bad file descriptor 00:26:08.333 [2024-07-12 19:22:07.767595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:08.333 [2024-07-12 19:22:07.937686] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
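The "Start failover from 10.0.0.2:4422 to 10.0.0.2:4420" and "Resetting controller successful" notices above are bdev_nvme walking through alternate paths that the test registered up front; the same wiring is repeated for the RPC-driven run further down (host/failover.sh@76-84). A condensed sketch of that wiring, assuming a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf instance listening on /var/tmp/bdevperf.sock, with SPDK_DIR as illustrative shorthand for the checkout used in this log:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path
  rpc="$SPDK_DIR/scripts/rpc.py"

  # Target side: expose the same subsystem on two additional ports.
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

  # Initiator (bdevperf) side: register all three paths under the one NVMe0 controller.
  for port in 4420 4421 4422; do
      $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
          -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  done
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

  # Detaching the active path forces bdev_nvme to fail over to one of the remaining
  # listeners, which is what produces the notices logged above.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
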
00:26:08.333 00:26:08.333 Latency(us) 00:26:08.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.333 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:08.333 Verification LBA range: start 0x0 length 0x4000 00:26:08.333 NVMe0n1 : 15.05 11218.09 43.82 558.90 0.00 10821.65 802.13 45875.20 00:26:08.333 =================================================================================================================== 00:26:08.333 Total : 11218.09 43.82 558.90 0.00 10821.65 802.13 45875.20 00:26:08.333 Received shutdown signal, test time was about 15.000000 seconds 00:26:08.333 00:26:08.333 Latency(us) 00:26:08.333 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:08.333 =================================================================================================================== 00:26:08.333 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:08.333 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:08.333 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1547714 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1547714 /var/tmp/bdevperf.sock 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1547714 ']' 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:08.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
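The host/failover.sh@65-75 lines above check the first run and stage the second: the captured bdevperf output must contain exactly three "Resetting controller successful" notices (one per detached path), and bdevperf is then relaunched idle so the remaining steps can be driven over RPC. A small sketch of both steps, assuming the first run's output was captured to try.txt as elsewhere in this trace; SPDK_DIR again stands in for the checkout path, and the polling loop is illustrative rather than the autotest waitforlisten helper:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path

  # 1) One reset notice per forced failover; the first run detached three paths.
  count=$(grep -c 'Resetting controller successful' try.txt)
  (( count == 3 )) || { echo "expected 3 controller resets, saw $count" >&2; exit 1; }

  # 2) Relaunch bdevperf idle (-z) with its own RPC socket for the RPC-driven run.
  "$SPDK_DIR/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  bdevperf_pid=$!

  # Wait until the RPC socket answers before issuing bdev/controller RPCs against it.
  until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done

The queued verify job is later kicked off with examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests, visible at host/failover.sh@89 below.
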
00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:08.334 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:08.912 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:08.912 19:22:14 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:26:08.912 19:22:14 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:09.173 [2024-07-12 19:22:15.107434] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:09.173 19:22:15 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:09.173 [2024-07-12 19:22:15.275800] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:09.433 19:22:15 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.433 NVMe0n1 00:26:09.433 19:22:15 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:09.695 00:26:09.695 19:22:15 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.268 00:26:10.268 19:22:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:10.268 19:22:16 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:10.268 19:22:16 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:10.528 19:22:16 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:13.831 19:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:13.831 19:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:13.831 19:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1548737 00:26:13.831 19:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:13.831 19:22:19 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1548737 00:26:14.773 0 00:26:14.773 19:22:20 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:14.773 [2024-07-12 19:22:14.195534] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:26:14.773 [2024-07-12 19:22:14.195592] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1547714 ] 00:26:14.773 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.773 [2024-07-12 19:22:14.254145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.773 [2024-07-12 19:22:14.317938] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.773 [2024-07-12 19:22:16.485935] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:14.773 [2024-07-12 19:22:16.485982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.773 [2024-07-12 19:22:16.485994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.773 [2024-07-12 19:22:16.486004] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.773 [2024-07-12 19:22:16.486011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.773 [2024-07-12 19:22:16.486019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.773 [2024-07-12 19:22:16.486027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.773 [2024-07-12 19:22:16.486034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.773 [2024-07-12 19:22:16.486044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.773 [2024-07-12 19:22:16.486051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:26:14.773 [2024-07-12 19:22:16.486080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:26:14.773 [2024-07-12 19:22:16.486095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11a8ef0 (9): Bad file descriptor 00:26:14.773 [2024-07-12 19:22:16.497432] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:14.773 Running I/O for 1 seconds... 
00:26:14.773 00:26:14.773 Latency(us) 00:26:14.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:14.773 Verification LBA range: start 0x0 length 0x4000 00:26:14.773 NVMe0n1 : 1.01 11229.39 43.86 0.00 0.00 11344.81 2553.17 10485.76 00:26:14.773 =================================================================================================================== 00:26:14.773 Total : 11229.39 43.86 0.00 0.00 11344.81 2553.17 10485.76 00:26:14.773 19:22:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:14.773 19:22:20 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:15.034 19:22:20 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.034 19:22:21 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.034 19:22:21 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:15.334 19:22:21 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:15.656 19:22:21 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1547714 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1547714 ']' 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1547714 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1547714 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1547714' 00:26:18.968 killing process with pid 1547714 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1547714 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1547714 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:18.968 19:22:24 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:18.968 
19:22:25 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.968 rmmod nvme_tcp 00:26:18.968 rmmod nvme_fabrics 00:26:18.968 rmmod nvme_keyring 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1544003 ']' 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1544003 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1544003 ']' 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1544003 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:18.968 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1544003 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1544003' 00:26:19.229 killing process with pid 1544003 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1544003 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1544003 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:19.229 19:22:25 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.777 19:22:27 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:21.777 00:26:21.777 real 0m39.392s 00:26:21.777 user 2m0.042s 00:26:21.777 sys 0m8.715s 00:26:21.777 19:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:21.777 19:22:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
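The tail of the failover test above is plain teardown: flush I/O, delete the subsystem, drop the per-run capture file, unload the kernel NVMe/TCP initiator modules loaded for this environment (the rmmod lines are modprobe's verbose output), stop the target and flush the initiator-side address. Roughly, with killprocess/remove_spdk_ns (autotest helpers) reduced to their visible effect and SPDK_DIR again standing in for the checkout path:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # assumed checkout path

  sync
  "$SPDK_DIR/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  rm -f "$SPDK_DIR/test/nvmf/host/try.txt"        # per-run bdevperf capture

  # Unload the kernel initiator stack (pulls nvme_tcp, nvme_fabrics, nvme_keyring out).
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics

  # Stop the target (its pid was recorded at startup) and clear the initiator address.
  # kill "$nvmfpid" && wait "$nvmfpid"             # killprocess does this with extra checks
  ip -4 addr flush cvl_0_1
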
00:26:21.777 ************************************ 00:26:21.777 END TEST nvmf_failover 00:26:21.777 ************************************ 00:26:21.777 19:22:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:21.777 19:22:27 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.777 19:22:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:21.777 19:22:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:21.777 19:22:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:21.777 ************************************ 00:26:21.777 START TEST nvmf_host_discovery 00:26:21.777 ************************************ 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:21.777 * Looking for test storage... 00:26:21.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.777 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:21.778 19:22:27 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:21.778 19:22:27 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:28.369 19:22:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:28.369 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:28.369 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:28.369 19:22:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:28.369 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:28.369 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:28.369 19:22:34 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:28.369 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:28.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:28.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:26:28.629 00:26:28.629 --- 10.0.0.2 ping statistics --- 00:26:28.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.629 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:28.629 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:28.629 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:26:28.629 00:26:28.629 --- 10.0.0.1 ping statistics --- 00:26:28.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:28.629 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1554057 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
1554057 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1554057 ']' 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:28.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:28.629 19:22:34 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.889 [2024-07-12 19:22:34.809602] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:26:28.889 [2024-07-12 19:22:34.809666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:28.889 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.889 [2024-07-12 19:22:34.896372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.889 [2024-07-12 19:22:34.989758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:28.889 [2024-07-12 19:22:34.989814] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:28.889 [2024-07-12 19:22:34.989822] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:28.889 [2024-07-12 19:22:34.989829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:28.889 [2024-07-12 19:22:34.989835] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
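With the namespaced target up (nvmf_tgt pid 1554057 inside cvl_0_0_ns_spdk, reachable at 10.0.0.2 over the wired-up cvl interfaces and ping-verified above), the host discovery test that follows drives two SPDK instances over RPC: the target gets a TCP transport, a discovery listener on port 8009 and two null bdevs to publish later, while a second nvmf_tgt on /tmp/host.sock plays the host and runs bdev_nvme_start_discovery against the discovery service. A condensed sketch of that flow, assuming the same paths as above; the rpc/hrpc shorthands are illustrative:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # assumed checkout path
  rpc="$SPDK_DIR/scripts/rpc.py"                                # target instance (default socket)
  hrpc="$SPDK_DIR/scripts/rpc.py -s /tmp/host.sock"             # host-side instance (discovery.sh@44)

  # Target side: TCP transport, discovery listener on 8009, and two null bdevs to publish later.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  $rpc bdev_null_create null0 1000 512
  $rpc bdev_null_create null1 1000 512

  # Host side: enable bdev_nvme logging and aim the discovery service at the target.
  $hrpc log_set_flag bdev_nvme
  $hrpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test

  # Nothing is exposed yet, so both host-side lists start out empty (the [[ '' == '' ]] checks below).
  $hrpc bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
  $hrpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs

  # Publishing a subsystem is what later makes controllers/bdevs appear through discovery.
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
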
00:26:28.889 [2024-07-12 19:22:34.989860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 [2024-07-12 19:22:35.650934] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 [2024-07-12 19:22:35.663172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 null0 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 null1 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1554094 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1554094 /tmp/host.sock 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1554094 ']' 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:29.831 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:29.831 19:22:35 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.831 [2024-07-12 19:22:35.756615] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:26:29.831 [2024-07-12 19:22:35.756676] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1554094 ] 00:26:29.831 EAL: No free 2048 kB hugepages reported on node 1 00:26:29.831 [2024-07-12 19:22:35.820332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:29.831 [2024-07-12 19:22:35.895802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.403 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:30.403 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:30.403 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:30.403 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:30.403 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.403 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.663 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.663 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:30.663 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.663 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.663 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:30.664 19:22:36 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:30.664 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.925 [2024-07-12 19:22:36.894292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.925 
19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:30.925 19:22:36 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:30.925 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:31.186 19:22:37 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:31.757 [2024-07-12 19:22:37.591299] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:31.757 [2024-07-12 19:22:37.591320] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:31.757 [2024-07-12 19:22:37.591334] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.757 [2024-07-12 19:22:37.680614] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:31.757 [2024-07-12 19:22:37.864572] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:31.757 [2024-07-12 19:22:37.864595] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:32.018 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # 
eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.279 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.540 [2024-07-12 19:22:38.446290] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:32.540 [2024-07-12 19:22:38.446987] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:32.540 [2024-07-12 19:22:38.447015] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.540 [2024-07-12 19:22:38.537294] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.540 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # sort -n 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:32.541 19:22:38 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:32.801 [2024-07-12 19:22:38.803629] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:32.801 [2024-07-12 19:22:38.803647] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.801 [2024-07-12 19:22:38.803653] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.744 [2024-07-12 19:22:39.718271] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:33.744 [2024-07-12 19:22:39.718292] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:33.744 [2024-07-12 19:22:39.726275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.744 [2024-07-12 19:22:39.726295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.744 [2024-07-12 19:22:39.726305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.744 [2024-07-12 19:22:39.726313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.744 [2024-07-12 19:22:39.726321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:33.744 [2024-07-12 19:22:39.726328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.744 [2024-07-12 19:22:39.726341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:33.744 [2024-07-12 19:22:39.726347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:33.744 [2024-07-12 19:22:39.726355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:33.744 [2024-07-12 19:22:39.736287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.744 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.744 [2024-07-12 19:22:39.746324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.744 [2024-07-12 19:22:39.746723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.744 [2024-07-12 19:22:39.746739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.744 [2024-07-12 19:22:39.746747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.744 [2024-07-12 19:22:39.746760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.744 [2024-07-12 19:22:39.746778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.744 [2024-07-12 19:22:39.746786] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.744 [2024-07-12 19:22:39.746794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.744 [2024-07-12 19:22:39.746805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
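The burst of connect() errno 111 and "controller reinitialization failed" records here is expected rather than a test failure: the script (host/discovery.sh@127 above) has just removed the 4420 listener, so the host's reconnect attempts against 10.0.0.2:4420 are refused until the discovery poller drops that path. Reconstructed from the xtrace only (rpc_cmd and waitforcondition are the autotest helpers seen in the trace, not standalone tools), the step that triggers this boils down to:

    # Remove the first listener; the host still holds an nvme0 path to port 4420.
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    # Then wait until only the second listener remains as a path
    # (NVMF_SECOND_PORT is 4421 in this run, per the later path check).
    waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]'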
00:26:33.744 [2024-07-12 19:22:39.756381] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.744 [2024-07-12 19:22:39.756799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.744 [2024-07-12 19:22:39.756812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.744 [2024-07-12 19:22:39.756820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.744 [2024-07-12 19:22:39.756831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.744 [2024-07-12 19:22:39.756848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.744 [2024-07-12 19:22:39.756854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.744 [2024-07-12 19:22:39.756861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.744 [2024-07-12 19:22:39.756872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.744 [2024-07-12 19:22:39.766433] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.744 [2024-07-12 19:22:39.766857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.744 [2024-07-12 19:22:39.766871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.744 [2024-07-12 19:22:39.766878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.744 [2024-07-12 19:22:39.766889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.744 [2024-07-12 19:22:39.766906] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.744 [2024-07-12 19:22:39.766913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.744 [2024-07-12 19:22:39.766920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.744 [2024-07-12 19:22:39.766930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
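For readers following the checks interleaved with these reconnect records: get_subsystem_paths and get_notification_count are small helpers in host/discovery.sh, and the xtrace shows what they expand to. A sketch assembled from those traced pipelines (the real helpers may differ in detail, and the notify_id handling is inferred from the notify_id=1/2/4 updates visible in the trace):

    # Ports (trsvcid) of the paths currently attached to controller nvme0.
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs

    # Count notifications newer than the last recorded notify_id; the test
    # then advances notify_id so later checks only see new events.
    rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length'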
00:26:33.744 [2024-07-12 19:22:39.776489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.744 [2024-07-12 19:22:39.776906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.744 [2024-07-12 19:22:39.776919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.744 [2024-07-12 19:22:39.776926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.744 [2024-07-12 19:22:39.776937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.744 [2024-07-12 19:22:39.776954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.744 [2024-07-12 19:22:39.776961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.744 [2024-07-12 19:22:39.776968] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.745 [2024-07-12 19:22:39.776978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.745 [2024-07-12 19:22:39.786542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.745 [2024-07-12 19:22:39.786870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.745 [2024-07-12 19:22:39.786882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.745 [2024-07-12 19:22:39.786890] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.745 [2024-07-12 19:22:39.786901] 
nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.745 [2024-07-12 19:22:39.786911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.745 [2024-07-12 19:22:39.786917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.745 [2024-07-12 19:22:39.786924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.745 [2024-07-12 19:22:39.786935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.745 [2024-07-12 19:22:39.796596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.745 [2024-07-12 19:22:39.796955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.745 [2024-07-12 19:22:39.796971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.745 [2024-07-12 19:22:39.796979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.745 [2024-07-12 19:22:39.796990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.745 [2024-07-12 19:22:39.797000] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.745 [2024-07-12 19:22:39.797006] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.745 [2024-07-12 19:22:39.797013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.745 [2024-07-12 19:22:39.797024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:33.745 [2024-07-12 19:22:39.806650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:33.745 [2024-07-12 19:22:39.807068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.745 [2024-07-12 19:22:39.807080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdea9f0 with addr=10.0.0.2, port=4420 00:26:33.745 [2024-07-12 19:22:39.807087] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdea9f0 is same with the state(5) to be set 00:26:33.745 [2024-07-12 19:22:39.807097] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdea9f0 (9): Bad file descriptor 00:26:33.745 [2024-07-12 19:22:39.807114] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:33.745 [2024-07-12 19:22:39.807120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:33.745 [2024-07-12 19:22:39.807135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:33.745 [2024-07-12 19:22:39.807145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
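In the records just below, the discovery poller reports the 4420 path as not found while 4421 is found again; the script confirms that only port 4421 remains, stops discovery, waits for the controller and its bdevs to disappear (the notification count check expects 2 more events), and then restarts discovery with -w (wait_for_attach). The tail of this chunk is a negative case: a second bdev_nvme_start_discovery under the same -b name must fail with JSON-RPC error -17, "File exists". Copied from the xtrace rather than paraphrased from discovery.sh, the sequence is essentially:

    rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 \
        -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w
    # Starting it again with the same -b name must fail ("File exists", code -17);
    # the script asserts that with its NOT wrapper around rpc_cmd.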
00:26:33.745 [2024-07-12 19:22:39.808117] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:33.745 [2024-07-12 19:22:39.808137] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:33.745 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd 
-s /tmp/host.sock notify_get_notifications -i 2 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.028 19:22:39 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:34.028 
19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.028 19:22:40 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.411 [2024-07-12 19:22:41.169331] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.411 [2024-07-12 19:22:41.169349] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.411 [2024-07-12 19:22:41.169361] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:35.411 [2024-07-12 19:22:41.296770] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:35.411 [2024-07-12 19:22:41.405707] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:35.411 [2024-07-12 19:22:41.405741] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:35.411 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.411 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:35.411 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:35.412 request: 00:26:35.412 { 00:26:35.412 "name": "nvme", 00:26:35.412 "trtype": "tcp", 00:26:35.412 "traddr": "10.0.0.2", 00:26:35.412 "adrfam": "ipv4", 00:26:35.412 "trsvcid": "8009", 00:26:35.412 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:35.412 "wait_for_attach": true, 00:26:35.412 "method": "bdev_nvme_start_discovery", 00:26:35.412 "req_id": 1 00:26:35.412 } 00:26:35.412 Got JSON-RPC error response 00:26:35.412 response: 00:26:35.412 { 00:26:35.412 "code": -17, 00:26:35.412 "message": "File exists" 00:26:35.412 } 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.412 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.672 request: 00:26:35.672 { 00:26:35.672 "name": "nvme_second", 00:26:35.672 "trtype": "tcp", 00:26:35.672 "traddr": "10.0.0.2", 00:26:35.672 "adrfam": "ipv4", 00:26:35.672 "trsvcid": "8009", 00:26:35.672 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:35.672 "wait_for_attach": true, 00:26:35.672 "method": "bdev_nvme_start_discovery", 00:26:35.672 "req_id": 1 00:26:35.672 } 00:26:35.672 Got JSON-RPC error response 00:26:35.672 response: 00:26:35.672 { 00:26:35.672 "code": -17, 00:26:35.672 "message": "File exists" 00:26:35.672 } 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:35.672 19:22:41 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:35.672 19:22:41 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.612 [2024-07-12 19:22:42.673361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:36.612 [2024-07-12 19:22:42.673400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28e40 with addr=10.0.0.2, port=8010 00:26:36.612 [2024-07-12 19:22:42.673415] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:36.613 [2024-07-12 19:22:42.673422] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:36.613 [2024-07-12 19:22:42.673430] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:37.554 [2024-07-12 19:22:43.675624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.554 [2024-07-12 19:22:43.675647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe28e40 with addr=10.0.0.2, port=8010 00:26:37.554 [2024-07-12 19:22:43.675658] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:37.554 [2024-07-12 19:22:43.675665] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:37.554 [2024-07-12 19:22:43.675671] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:38.938 [2024-07-12 19:22:44.677579] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:38.938 request: 00:26:38.938 { 00:26:38.938 "name": "nvme_second", 00:26:38.938 "trtype": "tcp", 00:26:38.938 "traddr": "10.0.0.2", 00:26:38.938 "adrfam": "ipv4", 00:26:38.938 "trsvcid": "8010", 00:26:38.938 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:38.938 "wait_for_attach": false, 00:26:38.938 "attach_timeout_ms": 3000, 00:26:38.938 "method": "bdev_nvme_start_discovery", 00:26:38.938 "req_id": 1 00:26:38.938 } 00:26:38.938 Got JSON-RPC error response 00:26:38.938 response: 00:26:38.938 { 00:26:38.938 "code": -110, 
00:26:38.938 "message": "Connection timed out" 00:26:38.938 } 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1554094 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:38.938 rmmod nvme_tcp 00:26:38.938 rmmod nvme_fabrics 00:26:38.938 rmmod nvme_keyring 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1554057 ']' 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1554057 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1554057 ']' 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1554057 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:38.938 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1554057 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1554057' 00:26:38.939 killing process with pid 1554057 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1554057 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1554057 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:38.939 19:22:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:41.486 00:26:41.486 real 0m19.623s 00:26:41.486 user 0m23.004s 00:26:41.486 sys 0m6.785s 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:41.486 ************************************ 00:26:41.486 END TEST nvmf_host_discovery 00:26:41.486 ************************************ 00:26:41.486 19:22:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:41.486 19:22:47 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:41.486 19:22:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:41.486 19:22:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:41.486 19:22:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:41.486 ************************************ 00:26:41.486 START TEST nvmf_host_multipath_status 00:26:41.486 ************************************ 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:41.486 * Looking for test storage... 
00:26:41.486 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:41.486 19:22:47 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:41.486 19:22:47 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:48.129 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:26:48.130 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:26:48.130 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
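The PCI scan above boils down to matching the Intel E810 device ID and collecting the kernel net devices bound to each matching function. A simplified, standalone sketch of the same check (it uses lspci instead of common.sh's cached PCI bus scan and assumes the same 8086:159b NICs are installed on the host):

    # List E810 functions and their net devices, mirroring the
    # "Found net devices under 0000:4b:00.x: cvl_0_x" lines that follow.
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
        done
    done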
00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:26:48.130 Found net devices under 0000:4b:00.0: cvl_0_0 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:26:48.130 Found net devices under 0000:4b:00.1: cvl_0_1 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:48.130 19:22:54 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:48.130 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:48.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:48.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:26:48.390 00:26:48.390 --- 10.0.0.2 ping statistics --- 00:26:48.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.390 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:48.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:48.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:26:48.390 00:26:48.390 --- 10.0.0.1 ping statistics --- 00:26:48.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:48.390 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1560262 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1560262 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1560262 ']' 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:48.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:48.390 19:22:54 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.651 [2024-07-12 19:22:54.529057] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
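Everything from here on talks to that target over JSON-RPC, so the test blocks until the RPC socket is ready. A minimal sketch of launching the target inside the namespace prepared above and polling for readiness (rpc_get_methods is used here only as a cheap liveness probe; the test itself relies on the waitforlisten helper shown in the log):

    spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!
    # Poll the default RPC socket until the target answers, then continue with rpc.py calls.
    until "$spdk/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done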
00:26:48.651 [2024-07-12 19:22:54.529108] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:48.651 EAL: No free 2048 kB hugepages reported on node 1 00:26:48.651 [2024-07-12 19:22:54.596281] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:48.651 [2024-07-12 19:22:54.661756] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.651 [2024-07-12 19:22:54.661793] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.651 [2024-07-12 19:22:54.661801] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.652 [2024-07-12 19:22:54.661808] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.652 [2024-07-12 19:22:54.661813] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.652 [2024-07-12 19:22:54.661962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.652 [2024-07-12 19:22:54.661964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1560262 00:26:49.222 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:49.483 [2024-07-12 19:22:55.465380] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:49.483 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:49.744 Malloc0 00:26:49.744 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:49.744 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:50.005 19:22:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:50.005 [2024-07-12 19:22:56.101943] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:50.005 19:22:56 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:50.266 [2024-07-12 19:22:56.254282] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1560622 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1560622 /var/tmp/bdevperf.sock 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1560622 ']' 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:50.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:50.266 19:22:56 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:51.209 19:22:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:51.209 19:22:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:51.209 19:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:51.209 19:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:51.780 Nvme0n1 00:26:51.780 19:22:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:52.041 Nvme0n1 00:26:52.041 19:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:52.041 19:22:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:54.592 19:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:54.593 19:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:54.593 19:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:54.593 19:23:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.536 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.797 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.797 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.797 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.797 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:56.059 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.059 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:56.059 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.059 19:23:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:56.059 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.059 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:56.059 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.059 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.321 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.321 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:56.321 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.321 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.583 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.583 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:56.583 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:56.583 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:56.843 19:23:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:57.784 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:57.784 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:57.784 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.784 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.045 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.045 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:58.045 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.045 19:23:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.045 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.045 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.045 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.045 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.306 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.306 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.306 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.306 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.567 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:58.828 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.828 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:58.828 19:23:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:59.088 19:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:59.089 19:23:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.472 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.767 19:23:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:01.027 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.027 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:01.027 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.027 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:01.291 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.291 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:01.291 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:01.291 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:01.552 19:23:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:02.492 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:02.492 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:02.492 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.492 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:02.752 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.752 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:02.752 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.752 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:03.013 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:03.013 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:03.013 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.013 19:23:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:03.013 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.013 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:03.013 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.013 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:03.274 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:03.274 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:03.274 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:03.274 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:03.535 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:03.795 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:03.795 19:23:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:05.183 19:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:05.183 19:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:05.183 19:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.183 19:23:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.183 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.444 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.444 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:27:05.444 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.444 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.444 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.444 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:05.705 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.705 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.705 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.705 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:05.705 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.705 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.966 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.966 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:05.966 19:23:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:05.966 19:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:06.227 19:23:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:07.167 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:07.167 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:07.167 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.167 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.427 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.427 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:07.427 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.427 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.688 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.948 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.948 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:07.948 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.948 19:23:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:08.209 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.209 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:08.209 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.209 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.209 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.209 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:08.471 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:08.471 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:27:08.732 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:08.733 19:23:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:09.675 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:09.675 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:09.675 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.675 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.937 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.937 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:09.937 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.937 19:23:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.198 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.460 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.460 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.460 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.460 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.722 19:23:16 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.722 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:10.722 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.722 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:10.722 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.722 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:10.722 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:10.983 19:23:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:11.243 19:23:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.187 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.448 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.448 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.448 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.448 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.709 19:23:18 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.709 19:23:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.969 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.969 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:12.969 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.969 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.231 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.231 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:13.231 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:13.231 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:13.493 19:23:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:14.438 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:14.438 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:14.438 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.438 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.703 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:14.703 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:14.703 19:23:20 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.703 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.001 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.001 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.001 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.001 19:23:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.001 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.001 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.001 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.001 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.290 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:15.554 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.554 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:15.554 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:15.816 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:15.816 19:23:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:16.763 19:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:16.763 19:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:16.763 19:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:16.763 19:23:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.025 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.025 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:17.025 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.025 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:17.289 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.551 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.551 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:17.551 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:17.551 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1560622 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1560622 ']' 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1560622 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1560622 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1560622' 00:27:17.812 killing process with pid 1560622 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1560622 00:27:17.812 19:23:23 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1560622 00:27:18.075 Connection closed with partial response: 00:27:18.075 00:27:18.075 00:27:18.075 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1560622 00:27:18.075 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:18.075 [2024-07-12 19:22:56.317756] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:27:18.075 [2024-07-12 19:22:56.317814] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1560622 ] 00:27:18.075 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.075 [2024-07-12 19:22:56.367801] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:18.075 [2024-07-12 19:22:56.420007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.075 Running I/O for 90 seconds... 
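The shell trace up to this point repeats one pattern: set_ANA_state flips the ANA state of the two target listeners (ports 4420 and 4421), the test sleeps for a second so the initiator can process the ANA change, and check_status then asserts the current/connected/accessible flags that bdevperf reports for each path. A minimal sketch of those helpers, reconstructed only from the commands visible in the trace (the real definitions live in test/nvmf/host/multipath_status.sh and may differ; the rpc_py and bdevperf_rpc_sock variables below are assumed shorthand for the absolute paths seen above):

    # Assumed shorthand for the paths that appear in the trace
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    bdevperf_rpc_sock=/var/tmp/bdevperf.sock

    # multipath_status.sh@59-60: set listener 4420 to $1 and listener 4421 to $2
    set_ANA_state() {
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        $rpc_py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
            -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

    # multipath_status.sh@64: read one io_path attribute for one port from
    # bdevperf and compare it against the expected value
    port_status() {
        local port=$1 attr=$2 expected=$3 actual
        actual=$($rpc_py -s $bdevperf_rpc_sock bdev_nvme_get_io_paths |
            jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$attr")
        [[ "$actual" == "$expected" ]]
    }

    # multipath_status.sh@68-73: the six expected flags, in the order traced
    check_status() {
        port_status 4420 current "$1"
        port_status 4421 current "$2"
        port_status 4420 connected "$3"
        port_status 4421 connected "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }

This also explains the change in expectations mid-trace: after the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call, both optimized paths are expected to report current=true (check_status true true ...), whereas before it only one path at a time did.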
00:27:18.075 [2024-07-12 19:23:09.722259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:62144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.075 [2024-07-12 19:23:09.722293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:62224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:62232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:62248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:62256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:62264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.722833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.722838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.723664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:62288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.723671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.723684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.075 [2024-07-12 19:23:09.723689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:18.075 [2024-07-12 19:23:09.723702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723725] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:62320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:62336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:62344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:62352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:62376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:62384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:62152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:18.076 [2024-07-12 19:23:09.723905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:62160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.076 [2024-07-12 19:23:09.723923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:62408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.723989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:62416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.723994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:62424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:62432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:62440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 
lba:62456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:62472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:62488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:62512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:62528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:62536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:62552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:62584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:62592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:18.076 [2024-07-12 19:23:09.724799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.076 [2024-07-12 19:23:09.724804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
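The NOTICE entries in this part of the try.txt dump are bdevperf-side error prints from nvme_qpair.c: each failed READ/WRITE is logged together with its completion, and the status string ASYMMETRIC ACCESS INACCESSIBLE (03/02) is NVMe status code type 3h (Path Related) / status code 02h, i.e. the error returned once a listener's ANA state is inaccessible and the trigger for retrying the I/O on the other path. Purely as an illustration (not part of the test), a dump like this can be summarized with something along these lines:

    # count how many completions in the dump carried the ANA-inaccessible status
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | wc -l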
00:27:18.076 [2024-07-12 19:23:09.724820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:62624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:62632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:62648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.724980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.724996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:62672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:62680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:62688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:62720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:62736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:62752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:62768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:62792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:62816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:62824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:62832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:18.077 [2024-07-12 19:23:09.725473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:62856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:09.725511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:62864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:09.725516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:130480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:130496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:130512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:130528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:130544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 
nsid:1 lba:130592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:130608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.077 [2024-07-12 19:23:21.846830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:18.077 [2024-07-12 19:23:21.846841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.077 [2024-07-12 19:23:21.846846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.078 [2024-07-12 19:23:21.846862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.078 [2024-07-12 19:23:21.846877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:18.078 [2024-07-12 19:23:21.846892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.078 [2024-07-12 19:23:21.846908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:130640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.078 [2024-07-12 19:23:21.846923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:130656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.078 [2024-07-12 19:23:21.846939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.846949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:130672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:18.078 [2024-07-12 19:23:21.846955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:18.078 [2024-07-12 19:23:21.847315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:18.078 [2024-07-12 19:23:21.847329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:18.078 [2024-07-12 19:23:21.847340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.078 [2024-07-12 19:23:21.847345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:18.078 [2024-07-12 19:23:21.847355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.078 [2024-07-12 19:23:21.847360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:18.078 [2024-07-12 19:23:21.847371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:130712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.078 [2024-07-12 19:23:21.847376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:18.078 [2024-07-12 19:23:21.847386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.078 [2024-07-12 19:23:21.847391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:18.078 [2024-07-12 19:23:21.847401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.078 [2024-07-12 19:23:21.847406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:18.078 [2024-07-12 19:23:21.847416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:130752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:18.078 [2024-07-12 19:23:21.847421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:18.078 Received shutdown signal, test time was about 25.668883 seconds
00:27:18.078
00:27:18.078 Latency(us)
00:27:18.078 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:18.078 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:27:18.078 Verification LBA range: start 0x0 length 0x4000
00:27:18.078 Nvme0n1 : 25.67 11011.47 43.01 0.00 0.00 11605.97 276.48 3019898.88
00:27:18.078 ===================================================================================================================
00:27:18.078 Total : 11011.47 43.01 0.00 0.00 11605.97 276.48 3019898.88
00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:18.078 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:18.338 rmmod nvme_tcp 00:27:18.338 rmmod nvme_fabrics 00:27:18.338 rmmod nvme_keyring 00:27:18.338 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:18.338 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:18.338 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:18.338 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1560262 ']' 00:27:18.338 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1560262 00:27:18.338 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1560262 ']' 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1560262 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1560262 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1560262' 00:27:18.339 killing process with pid 1560262 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1560262 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1560262 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:18.339 19:23:24 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.887 19:23:26 
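The trace above is the multipath fixture being torn down. As a quick consistency check on the summary table, 11011.47 IOPS of 4096-byte verify I/O works out to 11011.47 x 4096 / 1048576, roughly 43.01 MiB/s, which matches the MiB/s column over the reported 25.67 s runtime. Condensed into standalone commands, using this run's paths and PID (placeholders anywhere else), the teardown amounts to:

    # Condensed sketch of the cleanup traced above.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/scripts/rpc.py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f "$SPDK_ROOT/test/nvmf/host/try.txt"
    sync
    modprobe -v -r nvme-tcp       # also pulls out nvme_fabrics and nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill 1560262                  # the nvmf_tgt reactor for this test; the script then waits for it to exit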
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:20.887 00:27:20.887 real 0m39.406s 00:27:20.887 user 1m41.674s 00:27:20.887 sys 0m10.697s 00:27:20.887 19:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.887 19:23:26 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:20.887 ************************************ 00:27:20.887 END TEST nvmf_host_multipath_status 00:27:20.887 ************************************ 00:27:20.887 19:23:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:20.887 19:23:26 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:20.887 19:23:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:20.887 19:23:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.887 19:23:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:20.887 ************************************ 00:27:20.887 START TEST nvmf_discovery_remove_ifc 00:27:20.887 ************************************ 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:20.887 * Looking for test storage... 00:27:20.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:20.887 
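The host identity traced at nvmf/common.sh@17-19 above is generated fresh for each run. One way to reproduce the same derivation, consistent with the values logged here even if the script's exact text differs:

    # Generate a host NQN and pull the UUID back out of it (values differ per run).
    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep just the trailing UUID
    echo "--hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID"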
19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:20.887 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:20.888 19:23:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@295 -- # net_devs=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:27.475 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:27.475 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.475 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:27.476 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:27.476 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.476 19:23:33 
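The PCI walk above matched both ports of the Intel E810 NIC (device ID 0x8086:0x159b, driver ice) and resolved them through sysfs to cvl_0_0 and cvl_0_1. A standalone lookup that produces the same mapping; the lspci filter is an illustration, since the traced common.sh walks its own prebuilt PCI-bus cache instead:

    # List E810 ports (PCI ID 8086:159b) and the kernel net devices behind them.
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        for dev in /sys/bus/pci/devices/"$pci"/net/*; do
            [ -e "$dev" ] && echo "Found net devices under $pci: $(basename "$dev")"
        done
    done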
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:27.476 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:27.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:27.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.586 ms 00:27:27.737 00:27:27.737 --- 10.0.0.2 ping statistics --- 00:27:27.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.737 rtt min/avg/max/mdev = 0.586/0.586/0.586/0.000 ms 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:27.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:27.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:27:27.737 00:27:27.737 --- 10.0.0.1 ping statistics --- 00:27:27.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:27.737 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1570265 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1570265 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1570265 ']' 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:27.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:27.737 19:23:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:27.737 [2024-07-12 19:23:33.849977] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
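Everything the nvmf_tcp_init and nvmfappstart traces above do can be reproduced by hand. The following is a condensed sketch using this run's interface names (cvl_0_0, cvl_0_1), addresses, and nvmf_tgt flags; the target-side port is moved into a network namespace so the host and target can talk over real TCP on one machine:

    # Condensed sketch of the network prep and target launch traced above.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root netns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root netns -> target address
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> initiator address
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk "$SPDK_ROOT/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &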
00:27:27.737 [2024-07-12 19:23:33.850041] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:27.998 EAL: No free 2048 kB hugepages reported on node 1 00:27:27.998 [2024-07-12 19:23:33.937824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:27.998 [2024-07-12 19:23:34.030911] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:27.998 [2024-07-12 19:23:34.030965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:27.998 [2024-07-12 19:23:34.030973] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:27.998 [2024-07-12 19:23:34.030980] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:27.998 [2024-07-12 19:23:34.030986] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:27.998 [2024-07-12 19:23:34.031012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.570 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.570 [2024-07-12 19:23:34.692337] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.831 [2024-07-12 19:23:34.700555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:28.831 null0 00:27:28.831 [2024-07-12 19:23:34.732515] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1570531 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1570531 /tmp/host.sock 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1570531 ']' 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:28.831 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.831 19:23:34 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.831 [2024-07-12 19:23:34.816003] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:27:28.831 [2024-07-12 19:23:34.816073] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1570531 ] 00:27:28.831 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.831 [2024-07-12 19:23:34.880033] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.831 [2024-07-12 19:23:34.954424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.775 19:23:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.715 [2024-07-12 19:23:36.701182] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.715 [2024-07-12 19:23:36.701203] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:30.715 [2024-07-12 19:23:36.701216] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.715 [2024-07-12 19:23:36.829625] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:30.975 [2024-07-12 19:23:37.016591] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:30.975 [2024-07-12 19:23:37.016640] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:30.975 [2024-07-12 19:23:37.016663] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:30.975 [2024-07-12 19:23:37.016677] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:30.975 [2024-07-12 19:23:37.016703] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:30.975 [2024-07-12 19:23:37.020593] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x20097e0 was disconnected and freed. delete nvme_qpair. 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:30.975 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.236 19:23:37 
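On the host side, a second nvmf_tgt is started against /tmp/host.sock with bdev_nvme logging enabled, and discovery is pointed at the target's discovery service on 10.0.0.2:8009; once the discovered subsystem attaches, the namespace shows up as nvme0n1. Reduced to the commands traced above (flags copied from the log, SPDK_ROOT as in this run):

    # Host side of the test, as traced above.
    SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK_ROOT/build/bin/nvmf_tgt" -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &
    rpc="$SPDK_ROOT/scripts/rpc.py -s /tmp/host.sock"    # (the test waits for the socket first)
    $rpc bdev_nvme_set_options -e 1
    $rpc framework_start_init
    $rpc bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    $rpc bdev_get_bdevs | jq -r '.[].name' | sort | xargs   # expect "nvme0n1" once attached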
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:31.236 19:23:37 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.176 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.435 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.435 19:23:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:33.376 19:23:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:34.315 19:23:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.698 19:23:41 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:35.698 19:23:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:36.640 [2024-07-12 19:23:42.456932] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:36.640 [2024-07-12 19:23:42.456976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.640 [2024-07-12 19:23:42.456987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.640 [2024-07-12 19:23:42.456997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.640 [2024-07-12 19:23:42.457005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.640 [2024-07-12 19:23:42.457012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.640 [2024-07-12 19:23:42.457020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.640 [2024-07-12 19:23:42.457028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.640 [2024-07-12 19:23:42.457039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.640 [2024-07-12 19:23:42.457048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.640 [2024-07-12 19:23:42.457056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.640 [2024-07-12 19:23:42.457063] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd0080 is same with the state(5) to be set 00:27:36.640 [2024-07-12 19:23:42.466952] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd0080 (9): Bad file descriptor 00:27:36.640 [2024-07-12 19:23:42.476994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:36.640 19:23:42 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.640 19:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.640 19:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.640 19:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.640 19:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.640 19:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.640 19:23:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.581 [2024-07-12 19:23:43.497148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:37.581 [2024-07-12 19:23:43.497186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fd0080 with addr=10.0.0.2, port=4420 00:27:37.581 [2024-07-12 19:23:43.497198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fd0080 is same with the state(5) to be set 00:27:37.581 [2024-07-12 19:23:43.497222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fd0080 (9): Bad file descriptor 00:27:37.581 [2024-07-12 19:23:43.497591] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.581 [2024-07-12 19:23:43.497609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:37.581 [2024-07-12 19:23:43.497616] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:37.581 [2024-07-12 19:23:43.497625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:37.581 [2024-07-12 19:23:43.497641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.581 [2024-07-12 19:23:43.497648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:37.581 19:23:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.581 19:23:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:37.581 19:23:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.522 [2024-07-12 19:23:44.500024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.522 [2024-07-12 19:23:44.500042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.522 [2024-07-12 19:23:44.500050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.522 [2024-07-12 19:23:44.500056] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:38.522 [2024-07-12 19:23:44.500068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
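The get_bdev_list / sleep 1 records that fill the trace above are the test's polling loop. Condensed from the traced commands, it amounts to roughly the sketch below; this is a reconstruction, not the literal helpers in discovery_remove_ifc.sh (which may add a retry cap), and rpc_cmd is assumed to be the usual autotest wrapper around scripts/rpc.py.

# List the bdev names known to the host app over its private RPC socket.
get_bdev_list() {
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

# Poll once per second until the bdev list equals the expected string:
# "nvme0n1" while the path is up, "" once the controller has been dropped.
wait_for_bdev() {
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}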
00:27:38.522 [2024-07-12 19:23:44.500086] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:38.522 [2024-07-12 19:23:44.500106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.522 [2024-07-12 19:23:44.500121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.522 [2024-07-12 19:23:44.500135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.522 [2024-07-12 19:23:44.500143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.522 [2024-07-12 19:23:44.500151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.522 [2024-07-12 19:23:44.500158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.522 [2024-07-12 19:23:44.500166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.522 [2024-07-12 19:23:44.500174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.523 [2024-07-12 19:23:44.500182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.523 [2024-07-12 19:23:44.500190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.523 [2024-07-12 19:23:44.500198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
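How quickly the broken path above is abandoned (reconnect attempts given up, nvme0n1 and the discovery entry removed) follows from the options the test passed when it started discovery earlier in the trace. That startup sequence, re-wrapped here for readability but otherwise as traced, was:

rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1
rpc_cmd -s /tmp/host.sock framework_start_init
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test \
    --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
    --fast-io-fail-timeout-sec 1 --wait-for-attach

--ctrlr-loss-timeout-sec bounds how long reconnects are retried after the path drops before the controller, and with it its bdev, is deleted, which is what the now-empty bdev list above reflects.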
00:27:38.523 [2024-07-12 19:23:44.500644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fcf4c0 (9): Bad file descriptor 00:27:38.523 [2024-07-12 19:23:44.501656] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:38.523 [2024-07-12 19:23:44.501667] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.523 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:38.784 19:23:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:39.726 19:23:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.668 [2024-07-12 19:23:46.557372] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:40.668 [2024-07-12 19:23:46.557389] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:40.668 [2024-07-12 19:23:46.557402] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:40.668 [2024-07-12 19:23:46.686803] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.668 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.928 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:40.928 19:23:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.928 [2024-07-12 19:23:46.868214] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:40.928 [2024-07-12 19:23:46.868255] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:40.928 [2024-07-12 19:23:46.868274] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:40.928 [2024-07-12 19:23:46.868288] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:40.928 [2024-07-12 19:23:46.868296] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:40.928 [2024-07-12 19:23:46.874696] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1fe64b0 was disconnected and freed. delete nvme_qpair. 
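For reference, the fault these records recover from is injected purely with iproute2 inside the target's network namespace. Pulled together from the traced steps (namespace, interface and address names exactly as traced), the whole exercise is roughly:

# Drop the target's data address and link: the host side times out
# (errno 110), reconnect attempts fail, and nvme0n1 is removed.
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
wait_for_bdev ''

# Restore the address and link: the discovery service re-attaches the same
# subsystem as a new controller, so the namespace reappears as nvme1n1.
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1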
00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1570531 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1570531 ']' 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1570531 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1570531 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1570531' 00:27:41.869 killing process with pid 1570531 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1570531 00:27:41.869 19:23:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1570531 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:42.130 rmmod nvme_tcp 00:27:42.130 rmmod nvme_fabrics 00:27:42.130 rmmod nvme_keyring 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1570265 ']' 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1570265 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1570265 ']' 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1570265 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1570265 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1570265' 00:27:42.130 killing process with pid 1570265 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1570265 00:27:42.130 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1570265 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:42.391 19:23:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.305 19:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:44.305 00:27:44.305 real 0m23.752s 00:27:44.305 user 0m29.186s 00:27:44.305 sys 0m6.608s 00:27:44.305 19:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:44.305 19:23:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.305 ************************************ 00:27:44.305 END TEST nvmf_discovery_remove_ifc 00:27:44.305 ************************************ 00:27:44.305 19:23:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:44.305 19:23:50 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:44.305 19:23:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:44.305 19:23:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:44.305 19:23:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.567 ************************************ 00:27:44.567 START TEST nvmf_identify_kernel_target 00:27:44.567 ************************************ 
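The nvmf_identify_kernel_target run that starts here uses an in-kernel nvmet target rather than an SPDK one, then points spdk_nvme_identify at 10.0.0.1 over the namespaced e810 interfaces configured below. The configfs writes appear further down in the trace, but xtrace does not show redirection targets, so as a reading aid here is a rough reconstruction using the standard nvmet configfs attribute names; the attribute paths are an inference, while the NQN, address, port and model string match the trace and the identify output that follows, and /dev/nvme0n1 stands for whichever unused, non-zoned NVMe namespace the setup.sh reset further down leaves behind.

modprobe nvmet
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # reported later as Model Number
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The nvme discover output further down then shows the expected two log entries (the discovery subsystem itself plus nqn.2016-06.io.spdk:testnqn) before the two identify passes run.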
00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:44.567 * Looking for test storage... 00:27:44.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:44.567 19:23:50 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:44.567 19:23:50 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:27:51.239 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:27:51.239 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:27:51.239 Found net devices under 0000:4b:00.0: cvl_0_0 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:27:51.239 Found net devices under 0000:4b:00.1: cvl_0_1 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.239 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.500 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.519 ms 00:27:51.500 00:27:51.500 --- 10.0.0.2 ping statistics --- 00:27:51.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.500 rtt min/avg/max/mdev = 0.519/0.519/0.519/0.000 ms 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.325 ms 00:27:51.500 00:27:51.500 --- 10.0.0.1 ping statistics --- 00:27:51.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.500 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.500 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.761 19:23:57 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:51.761 19:23:57 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:55.064 Waiting for block devices as requested 00:27:55.064 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:55.064 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:55.064 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:55.325 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:55.325 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:55.325 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:55.325 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:55.586 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:55.586 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:55.848 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:55.848 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:55.848 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:56.108 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:56.108 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:56.108 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:56.108 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:56.368 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:56.629 No valid GPT data, bailing 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:56.629 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:27:56.630 00:27:56.630 Discovery Log Number of Records 2, Generation counter 2 00:27:56.630 =====Discovery Log Entry 0====== 00:27:56.630 trtype: tcp 00:27:56.630 adrfam: ipv4 00:27:56.630 subtype: current discovery subsystem 00:27:56.630 treq: not specified, sq flow control disable supported 00:27:56.630 portid: 1 00:27:56.630 trsvcid: 4420 00:27:56.630 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:56.630 traddr: 10.0.0.1 00:27:56.630 eflags: none 00:27:56.630 sectype: none 00:27:56.630 =====Discovery Log Entry 1====== 00:27:56.630 trtype: tcp 00:27:56.630 adrfam: ipv4 00:27:56.630 subtype: nvme subsystem 00:27:56.630 treq: not specified, sq flow control disable supported 00:27:56.630 portid: 1 00:27:56.630 trsvcid: 4420 00:27:56.630 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:56.630 traddr: 10.0.0.1 00:27:56.630 eflags: none 00:27:56.630 sectype: none 00:27:56.630 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:56.630 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:56.892 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.892 ===================================================== 00:27:56.892 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:56.892 ===================================================== 00:27:56.892 Controller Capabilities/Features 00:27:56.892 ================================ 00:27:56.892 Vendor ID: 0000 00:27:56.892 Subsystem Vendor ID: 0000 00:27:56.892 Serial Number: d0fbf1a36eb45ada11f2 00:27:56.892 Model Number: Linux 00:27:56.892 Firmware Version: 6.7.0-68 00:27:56.892 Recommended Arb Burst: 0 00:27:56.892 IEEE OUI Identifier: 00 00 00 00:27:56.892 Multi-path I/O 00:27:56.892 May have multiple subsystem ports: No 00:27:56.892 May have multiple 
controllers: No 00:27:56.892 Associated with SR-IOV VF: No 00:27:56.892 Max Data Transfer Size: Unlimited 00:27:56.892 Max Number of Namespaces: 0 00:27:56.892 Max Number of I/O Queues: 1024 00:27:56.892 NVMe Specification Version (VS): 1.3 00:27:56.892 NVMe Specification Version (Identify): 1.3 00:27:56.892 Maximum Queue Entries: 1024 00:27:56.892 Contiguous Queues Required: No 00:27:56.892 Arbitration Mechanisms Supported 00:27:56.892 Weighted Round Robin: Not Supported 00:27:56.892 Vendor Specific: Not Supported 00:27:56.892 Reset Timeout: 7500 ms 00:27:56.892 Doorbell Stride: 4 bytes 00:27:56.892 NVM Subsystem Reset: Not Supported 00:27:56.892 Command Sets Supported 00:27:56.892 NVM Command Set: Supported 00:27:56.892 Boot Partition: Not Supported 00:27:56.892 Memory Page Size Minimum: 4096 bytes 00:27:56.892 Memory Page Size Maximum: 4096 bytes 00:27:56.892 Persistent Memory Region: Not Supported 00:27:56.892 Optional Asynchronous Events Supported 00:27:56.892 Namespace Attribute Notices: Not Supported 00:27:56.892 Firmware Activation Notices: Not Supported 00:27:56.892 ANA Change Notices: Not Supported 00:27:56.892 PLE Aggregate Log Change Notices: Not Supported 00:27:56.892 LBA Status Info Alert Notices: Not Supported 00:27:56.892 EGE Aggregate Log Change Notices: Not Supported 00:27:56.892 Normal NVM Subsystem Shutdown event: Not Supported 00:27:56.892 Zone Descriptor Change Notices: Not Supported 00:27:56.892 Discovery Log Change Notices: Supported 00:27:56.892 Controller Attributes 00:27:56.892 128-bit Host Identifier: Not Supported 00:27:56.892 Non-Operational Permissive Mode: Not Supported 00:27:56.892 NVM Sets: Not Supported 00:27:56.892 Read Recovery Levels: Not Supported 00:27:56.892 Endurance Groups: Not Supported 00:27:56.892 Predictable Latency Mode: Not Supported 00:27:56.892 Traffic Based Keep ALive: Not Supported 00:27:56.892 Namespace Granularity: Not Supported 00:27:56.892 SQ Associations: Not Supported 00:27:56.892 UUID List: Not Supported 00:27:56.892 Multi-Domain Subsystem: Not Supported 00:27:56.892 Fixed Capacity Management: Not Supported 00:27:56.892 Variable Capacity Management: Not Supported 00:27:56.892 Delete Endurance Group: Not Supported 00:27:56.892 Delete NVM Set: Not Supported 00:27:56.892 Extended LBA Formats Supported: Not Supported 00:27:56.892 Flexible Data Placement Supported: Not Supported 00:27:56.892 00:27:56.892 Controller Memory Buffer Support 00:27:56.892 ================================ 00:27:56.892 Supported: No 00:27:56.892 00:27:56.892 Persistent Memory Region Support 00:27:56.892 ================================ 00:27:56.892 Supported: No 00:27:56.892 00:27:56.892 Admin Command Set Attributes 00:27:56.892 ============================ 00:27:56.892 Security Send/Receive: Not Supported 00:27:56.892 Format NVM: Not Supported 00:27:56.893 Firmware Activate/Download: Not Supported 00:27:56.893 Namespace Management: Not Supported 00:27:56.893 Device Self-Test: Not Supported 00:27:56.893 Directives: Not Supported 00:27:56.893 NVMe-MI: Not Supported 00:27:56.893 Virtualization Management: Not Supported 00:27:56.893 Doorbell Buffer Config: Not Supported 00:27:56.893 Get LBA Status Capability: Not Supported 00:27:56.893 Command & Feature Lockdown Capability: Not Supported 00:27:56.893 Abort Command Limit: 1 00:27:56.893 Async Event Request Limit: 1 00:27:56.893 Number of Firmware Slots: N/A 00:27:56.893 Firmware Slot 1 Read-Only: N/A 00:27:56.893 Firmware Activation Without Reset: N/A 00:27:56.893 Multiple Update Detection Support: N/A 
00:27:56.893 Firmware Update Granularity: No Information Provided 00:27:56.893 Per-Namespace SMART Log: No 00:27:56.893 Asymmetric Namespace Access Log Page: Not Supported 00:27:56.893 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:56.893 Command Effects Log Page: Not Supported 00:27:56.893 Get Log Page Extended Data: Supported 00:27:56.893 Telemetry Log Pages: Not Supported 00:27:56.893 Persistent Event Log Pages: Not Supported 00:27:56.893 Supported Log Pages Log Page: May Support 00:27:56.893 Commands Supported & Effects Log Page: Not Supported 00:27:56.893 Feature Identifiers & Effects Log Page:May Support 00:27:56.893 NVMe-MI Commands & Effects Log Page: May Support 00:27:56.893 Data Area 4 for Telemetry Log: Not Supported 00:27:56.893 Error Log Page Entries Supported: 1 00:27:56.893 Keep Alive: Not Supported 00:27:56.893 00:27:56.893 NVM Command Set Attributes 00:27:56.893 ========================== 00:27:56.893 Submission Queue Entry Size 00:27:56.893 Max: 1 00:27:56.893 Min: 1 00:27:56.893 Completion Queue Entry Size 00:27:56.893 Max: 1 00:27:56.893 Min: 1 00:27:56.893 Number of Namespaces: 0 00:27:56.893 Compare Command: Not Supported 00:27:56.893 Write Uncorrectable Command: Not Supported 00:27:56.893 Dataset Management Command: Not Supported 00:27:56.893 Write Zeroes Command: Not Supported 00:27:56.893 Set Features Save Field: Not Supported 00:27:56.893 Reservations: Not Supported 00:27:56.893 Timestamp: Not Supported 00:27:56.893 Copy: Not Supported 00:27:56.893 Volatile Write Cache: Not Present 00:27:56.893 Atomic Write Unit (Normal): 1 00:27:56.893 Atomic Write Unit (PFail): 1 00:27:56.893 Atomic Compare & Write Unit: 1 00:27:56.893 Fused Compare & Write: Not Supported 00:27:56.893 Scatter-Gather List 00:27:56.893 SGL Command Set: Supported 00:27:56.893 SGL Keyed: Not Supported 00:27:56.893 SGL Bit Bucket Descriptor: Not Supported 00:27:56.893 SGL Metadata Pointer: Not Supported 00:27:56.893 Oversized SGL: Not Supported 00:27:56.893 SGL Metadata Address: Not Supported 00:27:56.893 SGL Offset: Supported 00:27:56.893 Transport SGL Data Block: Not Supported 00:27:56.893 Replay Protected Memory Block: Not Supported 00:27:56.893 00:27:56.893 Firmware Slot Information 00:27:56.893 ========================= 00:27:56.893 Active slot: 0 00:27:56.893 00:27:56.893 00:27:56.893 Error Log 00:27:56.893 ========= 00:27:56.893 00:27:56.893 Active Namespaces 00:27:56.893 ================= 00:27:56.893 Discovery Log Page 00:27:56.893 ================== 00:27:56.893 Generation Counter: 2 00:27:56.893 Number of Records: 2 00:27:56.893 Record Format: 0 00:27:56.893 00:27:56.893 Discovery Log Entry 0 00:27:56.893 ---------------------- 00:27:56.893 Transport Type: 3 (TCP) 00:27:56.893 Address Family: 1 (IPv4) 00:27:56.893 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:56.893 Entry Flags: 00:27:56.893 Duplicate Returned Information: 0 00:27:56.893 Explicit Persistent Connection Support for Discovery: 0 00:27:56.893 Transport Requirements: 00:27:56.893 Secure Channel: Not Specified 00:27:56.893 Port ID: 1 (0x0001) 00:27:56.893 Controller ID: 65535 (0xffff) 00:27:56.893 Admin Max SQ Size: 32 00:27:56.893 Transport Service Identifier: 4420 00:27:56.893 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:56.893 Transport Address: 10.0.0.1 00:27:56.893 Discovery Log Entry 1 00:27:56.893 ---------------------- 00:27:56.893 Transport Type: 3 (TCP) 00:27:56.893 Address Family: 1 (IPv4) 00:27:56.893 Subsystem Type: 2 (NVM Subsystem) 00:27:56.893 Entry Flags: 
00:27:56.893 Duplicate Returned Information: 0 00:27:56.893 Explicit Persistent Connection Support for Discovery: 0 00:27:56.893 Transport Requirements: 00:27:56.893 Secure Channel: Not Specified 00:27:56.893 Port ID: 1 (0x0001) 00:27:56.893 Controller ID: 65535 (0xffff) 00:27:56.893 Admin Max SQ Size: 32 00:27:56.893 Transport Service Identifier: 4420 00:27:56.893 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:56.893 Transport Address: 10.0.0.1 00:27:56.893 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:56.893 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.893 get_feature(0x01) failed 00:27:56.893 get_feature(0x02) failed 00:27:56.893 get_feature(0x04) failed 00:27:56.893 ===================================================== 00:27:56.893 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:56.893 ===================================================== 00:27:56.893 Controller Capabilities/Features 00:27:56.893 ================================ 00:27:56.893 Vendor ID: 0000 00:27:56.893 Subsystem Vendor ID: 0000 00:27:56.893 Serial Number: e54e907e1a633eaf6f10 00:27:56.893 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:56.893 Firmware Version: 6.7.0-68 00:27:56.893 Recommended Arb Burst: 6 00:27:56.893 IEEE OUI Identifier: 00 00 00 00:27:56.893 Multi-path I/O 00:27:56.893 May have multiple subsystem ports: Yes 00:27:56.893 May have multiple controllers: Yes 00:27:56.893 Associated with SR-IOV VF: No 00:27:56.893 Max Data Transfer Size: Unlimited 00:27:56.893 Max Number of Namespaces: 1024 00:27:56.893 Max Number of I/O Queues: 128 00:27:56.893 NVMe Specification Version (VS): 1.3 00:27:56.893 NVMe Specification Version (Identify): 1.3 00:27:56.893 Maximum Queue Entries: 1024 00:27:56.893 Contiguous Queues Required: No 00:27:56.893 Arbitration Mechanisms Supported 00:27:56.893 Weighted Round Robin: Not Supported 00:27:56.893 Vendor Specific: Not Supported 00:27:56.893 Reset Timeout: 7500 ms 00:27:56.893 Doorbell Stride: 4 bytes 00:27:56.893 NVM Subsystem Reset: Not Supported 00:27:56.893 Command Sets Supported 00:27:56.893 NVM Command Set: Supported 00:27:56.893 Boot Partition: Not Supported 00:27:56.893 Memory Page Size Minimum: 4096 bytes 00:27:56.893 Memory Page Size Maximum: 4096 bytes 00:27:56.893 Persistent Memory Region: Not Supported 00:27:56.893 Optional Asynchronous Events Supported 00:27:56.893 Namespace Attribute Notices: Supported 00:27:56.893 Firmware Activation Notices: Not Supported 00:27:56.893 ANA Change Notices: Supported 00:27:56.893 PLE Aggregate Log Change Notices: Not Supported 00:27:56.893 LBA Status Info Alert Notices: Not Supported 00:27:56.893 EGE Aggregate Log Change Notices: Not Supported 00:27:56.893 Normal NVM Subsystem Shutdown event: Not Supported 00:27:56.893 Zone Descriptor Change Notices: Not Supported 00:27:56.893 Discovery Log Change Notices: Not Supported 00:27:56.893 Controller Attributes 00:27:56.893 128-bit Host Identifier: Supported 00:27:56.893 Non-Operational Permissive Mode: Not Supported 00:27:56.893 NVM Sets: Not Supported 00:27:56.894 Read Recovery Levels: Not Supported 00:27:56.894 Endurance Groups: Not Supported 00:27:56.894 Predictable Latency Mode: Not Supported 00:27:56.894 Traffic Based Keep ALive: Supported 00:27:56.894 Namespace Granularity: Not Supported 
00:27:56.894 SQ Associations: Not Supported 00:27:56.894 UUID List: Not Supported 00:27:56.894 Multi-Domain Subsystem: Not Supported 00:27:56.894 Fixed Capacity Management: Not Supported 00:27:56.894 Variable Capacity Management: Not Supported 00:27:56.894 Delete Endurance Group: Not Supported 00:27:56.894 Delete NVM Set: Not Supported 00:27:56.894 Extended LBA Formats Supported: Not Supported 00:27:56.894 Flexible Data Placement Supported: Not Supported 00:27:56.894 00:27:56.894 Controller Memory Buffer Support 00:27:56.894 ================================ 00:27:56.894 Supported: No 00:27:56.894 00:27:56.894 Persistent Memory Region Support 00:27:56.894 ================================ 00:27:56.894 Supported: No 00:27:56.894 00:27:56.894 Admin Command Set Attributes 00:27:56.894 ============================ 00:27:56.894 Security Send/Receive: Not Supported 00:27:56.894 Format NVM: Not Supported 00:27:56.894 Firmware Activate/Download: Not Supported 00:27:56.894 Namespace Management: Not Supported 00:27:56.894 Device Self-Test: Not Supported 00:27:56.894 Directives: Not Supported 00:27:56.894 NVMe-MI: Not Supported 00:27:56.894 Virtualization Management: Not Supported 00:27:56.894 Doorbell Buffer Config: Not Supported 00:27:56.894 Get LBA Status Capability: Not Supported 00:27:56.894 Command & Feature Lockdown Capability: Not Supported 00:27:56.894 Abort Command Limit: 4 00:27:56.894 Async Event Request Limit: 4 00:27:56.894 Number of Firmware Slots: N/A 00:27:56.894 Firmware Slot 1 Read-Only: N/A 00:27:56.894 Firmware Activation Without Reset: N/A 00:27:56.894 Multiple Update Detection Support: N/A 00:27:56.894 Firmware Update Granularity: No Information Provided 00:27:56.894 Per-Namespace SMART Log: Yes 00:27:56.894 Asymmetric Namespace Access Log Page: Supported 00:27:56.894 ANA Transition Time : 10 sec 00:27:56.894 00:27:56.894 Asymmetric Namespace Access Capabilities 00:27:56.894 ANA Optimized State : Supported 00:27:56.894 ANA Non-Optimized State : Supported 00:27:56.894 ANA Inaccessible State : Supported 00:27:56.894 ANA Persistent Loss State : Supported 00:27:56.894 ANA Change State : Supported 00:27:56.894 ANAGRPID is not changed : No 00:27:56.894 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:56.894 00:27:56.894 ANA Group Identifier Maximum : 128 00:27:56.894 Number of ANA Group Identifiers : 128 00:27:56.894 Max Number of Allowed Namespaces : 1024 00:27:56.894 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:56.894 Command Effects Log Page: Supported 00:27:56.894 Get Log Page Extended Data: Supported 00:27:56.894 Telemetry Log Pages: Not Supported 00:27:56.894 Persistent Event Log Pages: Not Supported 00:27:56.894 Supported Log Pages Log Page: May Support 00:27:56.894 Commands Supported & Effects Log Page: Not Supported 00:27:56.894 Feature Identifiers & Effects Log Page:May Support 00:27:56.894 NVMe-MI Commands & Effects Log Page: May Support 00:27:56.894 Data Area 4 for Telemetry Log: Not Supported 00:27:56.894 Error Log Page Entries Supported: 128 00:27:56.894 Keep Alive: Supported 00:27:56.894 Keep Alive Granularity: 1000 ms 00:27:56.894 00:27:56.894 NVM Command Set Attributes 00:27:56.894 ========================== 00:27:56.894 Submission Queue Entry Size 00:27:56.894 Max: 64 00:27:56.894 Min: 64 00:27:56.894 Completion Queue Entry Size 00:27:56.894 Max: 16 00:27:56.894 Min: 16 00:27:56.894 Number of Namespaces: 1024 00:27:56.894 Compare Command: Not Supported 00:27:56.894 Write Uncorrectable Command: Not Supported 00:27:56.894 Dataset Management Command: Supported 
00:27:56.894 Write Zeroes Command: Supported 00:27:56.894 Set Features Save Field: Not Supported 00:27:56.894 Reservations: Not Supported 00:27:56.894 Timestamp: Not Supported 00:27:56.894 Copy: Not Supported 00:27:56.894 Volatile Write Cache: Present 00:27:56.894 Atomic Write Unit (Normal): 1 00:27:56.894 Atomic Write Unit (PFail): 1 00:27:56.894 Atomic Compare & Write Unit: 1 00:27:56.894 Fused Compare & Write: Not Supported 00:27:56.894 Scatter-Gather List 00:27:56.894 SGL Command Set: Supported 00:27:56.894 SGL Keyed: Not Supported 00:27:56.894 SGL Bit Bucket Descriptor: Not Supported 00:27:56.894 SGL Metadata Pointer: Not Supported 00:27:56.894 Oversized SGL: Not Supported 00:27:56.894 SGL Metadata Address: Not Supported 00:27:56.894 SGL Offset: Supported 00:27:56.894 Transport SGL Data Block: Not Supported 00:27:56.894 Replay Protected Memory Block: Not Supported 00:27:56.894 00:27:56.894 Firmware Slot Information 00:27:56.894 ========================= 00:27:56.894 Active slot: 0 00:27:56.894 00:27:56.894 Asymmetric Namespace Access 00:27:56.894 =========================== 00:27:56.894 Change Count : 0 00:27:56.894 Number of ANA Group Descriptors : 1 00:27:56.894 ANA Group Descriptor : 0 00:27:56.894 ANA Group ID : 1 00:27:56.894 Number of NSID Values : 1 00:27:56.894 Change Count : 0 00:27:56.894 ANA State : 1 00:27:56.894 Namespace Identifier : 1 00:27:56.894 00:27:56.894 Commands Supported and Effects 00:27:56.894 ============================== 00:27:56.894 Admin Commands 00:27:56.894 -------------- 00:27:56.894 Get Log Page (02h): Supported 00:27:56.894 Identify (06h): Supported 00:27:56.894 Abort (08h): Supported 00:27:56.894 Set Features (09h): Supported 00:27:56.894 Get Features (0Ah): Supported 00:27:56.894 Asynchronous Event Request (0Ch): Supported 00:27:56.894 Keep Alive (18h): Supported 00:27:56.894 I/O Commands 00:27:56.894 ------------ 00:27:56.894 Flush (00h): Supported 00:27:56.894 Write (01h): Supported LBA-Change 00:27:56.894 Read (02h): Supported 00:27:56.894 Write Zeroes (08h): Supported LBA-Change 00:27:56.894 Dataset Management (09h): Supported 00:27:56.894 00:27:56.894 Error Log 00:27:56.894 ========= 00:27:56.894 Entry: 0 00:27:56.894 Error Count: 0x3 00:27:56.894 Submission Queue Id: 0x0 00:27:56.894 Command Id: 0x5 00:27:56.894 Phase Bit: 0 00:27:56.894 Status Code: 0x2 00:27:56.894 Status Code Type: 0x0 00:27:56.894 Do Not Retry: 1 00:27:56.894 Error Location: 0x28 00:27:56.894 LBA: 0x0 00:27:56.894 Namespace: 0x0 00:27:56.894 Vendor Log Page: 0x0 00:27:56.894 ----------- 00:27:56.894 Entry: 1 00:27:56.894 Error Count: 0x2 00:27:56.894 Submission Queue Id: 0x0 00:27:56.894 Command Id: 0x5 00:27:56.894 Phase Bit: 0 00:27:56.894 Status Code: 0x2 00:27:56.894 Status Code Type: 0x0 00:27:56.894 Do Not Retry: 1 00:27:56.894 Error Location: 0x28 00:27:56.894 LBA: 0x0 00:27:56.894 Namespace: 0x0 00:27:56.894 Vendor Log Page: 0x0 00:27:56.894 ----------- 00:27:56.894 Entry: 2 00:27:56.894 Error Count: 0x1 00:27:56.894 Submission Queue Id: 0x0 00:27:56.894 Command Id: 0x4 00:27:56.894 Phase Bit: 0 00:27:56.894 Status Code: 0x2 00:27:56.894 Status Code Type: 0x0 00:27:56.894 Do Not Retry: 1 00:27:56.894 Error Location: 0x28 00:27:56.894 LBA: 0x0 00:27:56.894 Namespace: 0x0 00:27:56.894 Vendor Log Page: 0x0 00:27:56.894 00:27:56.894 Number of Queues 00:27:56.894 ================ 00:27:56.894 Number of I/O Submission Queues: 128 00:27:56.894 Number of I/O Completion Queues: 128 00:27:56.894 00:27:56.894 ZNS Specific Controller Data 00:27:56.894 
============================ 00:27:56.894 Zone Append Size Limit: 0 00:27:56.894 00:27:56.894 00:27:56.894 Active Namespaces 00:27:56.894 ================= 00:27:56.894 get_feature(0x05) failed 00:27:56.894 Namespace ID:1 00:27:56.894 Command Set Identifier: NVM (00h) 00:27:56.894 Deallocate: Supported 00:27:56.894 Deallocated/Unwritten Error: Not Supported 00:27:56.894 Deallocated Read Value: Unknown 00:27:56.894 Deallocate in Write Zeroes: Not Supported 00:27:56.894 Deallocated Guard Field: 0xFFFF 00:27:56.894 Flush: Supported 00:27:56.894 Reservation: Not Supported 00:27:56.894 Namespace Sharing Capabilities: Multiple Controllers 00:27:56.894 Size (in LBAs): 3750748848 (1788GiB) 00:27:56.894 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:56.894 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:56.894 UUID: f96a9927-8bf1-4028-b03d-1cd6c9f70052 00:27:56.894 Thin Provisioning: Not Supported 00:27:56.894 Per-NS Atomic Units: Yes 00:27:56.894 Atomic Write Unit (Normal): 8 00:27:56.894 Atomic Write Unit (PFail): 8 00:27:56.894 Preferred Write Granularity: 8 00:27:56.894 Atomic Compare & Write Unit: 8 00:27:56.894 Atomic Boundary Size (Normal): 0 00:27:56.894 Atomic Boundary Size (PFail): 0 00:27:56.894 Atomic Boundary Offset: 0 00:27:56.894 NGUID/EUI64 Never Reused: No 00:27:56.894 ANA group ID: 1 00:27:56.894 Namespace Write Protected: No 00:27:56.894 Number of LBA Formats: 1 00:27:56.895 Current LBA Format: LBA Format #00 00:27:56.895 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:56.895 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.895 rmmod nvme_tcp 00:27:56.895 rmmod nvme_fabrics 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.895 19:24:02 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:59.437 19:24:05 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.741 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.741 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:03.003 00:28:03.003 real 0m18.575s 00:28:03.003 user 0m4.985s 00:28:03.003 sys 0m10.601s 00:28:03.003 19:24:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:03.003 19:24:09 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:03.003 ************************************ 00:28:03.003 END TEST nvmf_identify_kernel_target 00:28:03.003 ************************************ 00:28:03.003 19:24:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:03.003 19:24:09 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:03.003 19:24:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:03.003 19:24:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:03.003 19:24:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:03.003 ************************************ 
00:28:03.003 START TEST nvmf_auth_host 00:28:03.003 ************************************ 00:28:03.003 19:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:03.264 * Looking for test storage... 00:28:03.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:03.264 19:24:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.857 
19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:28:09.857 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:28:09.857 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.857 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:28:09.858 Found net devices under 0000:4b:00.0: 
cvl_0_0 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:28:09.858 Found net devices under 0000:4b:00.1: cvl_0_1 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.858 19:24:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.119 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.119 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.119 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.119 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.119 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.510 ms 00:28:10.380 00:28:10.380 --- 10.0.0.2 ping statistics --- 00:28:10.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.380 rtt min/avg/max/mdev = 0.510/0.510/0.510/0.000 ms 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:28:10.380 00:28:10.380 --- 10.0.0.1 ping statistics --- 00:28:10.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.380 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1585260 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1585260 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1585260 ']' 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
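For orientation, the nvmf_tcp_init sequence traced above reduces to a handful of iproute2/iptables commands: the port selected as the target interface (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed 10.0.0.2, while the initiator interface (cvl_0_1) stays in the root namespace as 10.0.0.1, so the NVMe/TCP traffic in this NET_TYPE=phy run goes through the e810 NIC rather than a purely virtual device. A condensed, stand-alone sketch of the same steps (interface names and addresses taken from the log; run as root) would be:

# target-side namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# initiator side stays in the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# let NVMe/TCP (port 4420) in on the initiator interface
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# sanity checks in both directions, matching the ping output in the trace
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt application is then started with this namespace prepended to its command line (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth, as shown above), which is why the target listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1.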
00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.380 19:24:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:11.321 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=20ff7338d1a9a87ce2e0a8a9a00ffe56 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.YKV 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 20ff7338d1a9a87ce2e0a8a9a00ffe56 0 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 20ff7338d1a9a87ce2e0a8a9a00ffe56 0 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=20ff7338d1a9a87ce2e0a8a9a00ffe56 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.YKV 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.YKV 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.YKV 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:11.322 
19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a9923cf2039aa78fdfc52b409b50251f0cffbedeb6b3264ae3ea5f6a853cc90 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.NSv 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a9923cf2039aa78fdfc52b409b50251f0cffbedeb6b3264ae3ea5f6a853cc90 3 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a9923cf2039aa78fdfc52b409b50251f0cffbedeb6b3264ae3ea5f6a853cc90 3 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a9923cf2039aa78fdfc52b409b50251f0cffbedeb6b3264ae3ea5f6a853cc90 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.NSv 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.NSv 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.NSv 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=327074b43d70176376edd5370cde700ad87465d5fc5ac86c 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.QrM 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 327074b43d70176376edd5370cde700ad87465d5fc5ac86c 0 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 327074b43d70176376edd5370cde700ad87465d5fc5ac86c 0 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=327074b43d70176376edd5370cde700ad87465d5fc5ac86c 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.QrM 00:28:11.322 19:24:17 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.QrM 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.QrM 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e66378dec5d9290accfcc6baca3ee8de0c3418d213999059 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hmg 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e66378dec5d9290accfcc6baca3ee8de0c3418d213999059 2 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e66378dec5d9290accfcc6baca3ee8de0c3418d213999059 2 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e66378dec5d9290accfcc6baca3ee8de0c3418d213999059 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hmg 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hmg 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hmg 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.322 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=9093ae611843ba1e6b29912c604e5e03 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.VO5 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 9093ae611843ba1e6b29912c604e5e03 1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 9093ae611843ba1e6b29912c604e5e03 1 
00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=9093ae611843ba1e6b29912c604e5e03 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.VO5 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.VO5 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.VO5 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8a5f0da1ecb31dd58fd35cdd6c94e85a 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.z1W 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8a5f0da1ecb31dd58fd35cdd6c94e85a 1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8a5f0da1ecb31dd58fd35cdd6c94e85a 1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8a5f0da1ecb31dd58fd35cdd6c94e85a 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.z1W 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.z1W 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.z1W 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=e8ffed36f682a383ed637673d1f7bfbe260fb782ff4a968c 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.63G 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e8ffed36f682a383ed637673d1f7bfbe260fb782ff4a968c 2 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e8ffed36f682a383ed637673d1f7bfbe260fb782ff4a968c 2 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e8ffed36f682a383ed637673d1f7bfbe260fb782ff4a968c 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.63G 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.63G 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.63G 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2948002a6e73bfe12f098945e07fda57 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.9EP 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2948002a6e73bfe12f098945e07fda57 0 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2948002a6e73bfe12f098945e07fda57 0 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2948002a6e73bfe12f098945e07fda57 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.9EP 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.9EP 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.9EP 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8eee483fe8b277ef94a90502275a90b1a7c2f46724a9e4c22ee79fa00daec79f 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.1H0 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8eee483fe8b277ef94a90502275a90b1a7c2f46724a9e4c22ee79fa00daec79f 3 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8eee483fe8b277ef94a90502275a90b1a7c2f46724a9e4c22ee79fa00daec79f 3 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8eee483fe8b277ef94a90502275a90b1a7c2f46724a9e4c22ee79fa00daec79f 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:11.583 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.1H0 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.1H0 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.1H0 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1585260 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1585260 ']' 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:11.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
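The gen_dhchap_key calls above read a random hex string with xxd and hand it to format_dhchap_key, whose inline python body is not echoed into this trace. A minimal sketch of that wrapping, inferred from the secrets that appear later in this log (the hex characters seem to be used verbatim as the secret bytes, with a little-endian CRC-32 appended before base64 encoding); the helper name and the checksum choice are assumptions, not the harness's exact code:

format_dhchap_key_sketch() {
    local key=$1 digest=$2    # digest id: 0=null, 1=sha256, 2=sha384, 3=sha512
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                              # hex characters used as-is (assumption)
crc = zlib.crc32(key).to_bytes(4, byteorder="little")   # 4-byte checksum trailer (assumption)
print("DHHC-1:{:02}:{}:".format(int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
}
# e.g.: format_dhchap_key_sketch "$(xxd -p -c0 -l 16 /dev/urandom)" 1 > "$(mktemp -t spdk.key-sha256.XXX)"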
00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:11.843 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YKV 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.NSv ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.NSv 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.QrM 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hmg ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hmg 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.VO5 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.z1W ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z1W 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
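rpc_cmd forwards its arguments to SPDK's JSON-RPC client over the /var/tmp/spdk.sock socket waited on above, so each keyring_file_add_key line registers one of the freshly written key files under a name (key0..key4, ckey0..ckey3) that later RPCs can reference. A hedged standalone equivalent for the sha256 pair added above, with the socket path and file names taken from this trace:

scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key key2  /tmp/spdk.key-sha256.VO5   # host secret for keyid 2
scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z1W   # controller secret for keyid 2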
00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.63G 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.9EP ]] 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.9EP 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.844 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.1H0 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:12.104 19:24:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:12.104 19:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:12.104 19:24:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:15.405 Waiting for block devices as requested 00:28:15.405 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:15.405 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:15.405 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:15.405 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:15.405 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:15.665 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:15.665 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:15.665 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:15.926 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:15.926 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:15.926 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.188 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:16.188 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:16.188 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:16.448 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:16.448 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:16.448 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:17.389 No valid GPT data, bailing 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:28:17.389 00:28:17.389 Discovery Log Number of Records 2, Generation counter 2 00:28:17.389 =====Discovery Log Entry 0====== 00:28:17.389 trtype: tcp 00:28:17.389 adrfam: ipv4 00:28:17.389 subtype: current discovery subsystem 00:28:17.389 treq: not specified, sq flow control disable supported 00:28:17.389 portid: 1 00:28:17.389 trsvcid: 4420 00:28:17.389 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:17.389 traddr: 10.0.0.1 00:28:17.389 eflags: none 00:28:17.389 sectype: none 00:28:17.389 =====Discovery Log Entry 1====== 00:28:17.389 trtype: tcp 00:28:17.389 adrfam: ipv4 00:28:17.389 subtype: nvme subsystem 00:28:17.389 treq: not specified, sq flow control disable supported 00:28:17.389 portid: 1 00:28:17.389 trsvcid: 4420 00:28:17.389 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:17.389 traddr: 10.0.0.1 00:28:17.389 eflags: none 00:28:17.389 sectype: none 00:28:17.389 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 
]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.650 nvme0n1 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.650 19:24:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.650 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.911 
19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.911 nvme0n1 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:17.911 19:24:23 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:17.911 19:24:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:17.911 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.912 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.173 nvme0n1 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
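nvmet_auth_set_key, traced above for sha256/ffdhe2048 and each keyid in turn, pushes the chosen digest, DH group and DHHC-1 secrets to the kernel target for the allowed host nqn.2024-02.io.spdk:host0 that was linked into the subsystem earlier; as with the port setup, the echo redirection targets are not visible in the trace. A plausible per-host mapping (attribute names assumed from the kernel's nvmet in-band-authentication support, not read from this log; secrets truncated here):

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'            > "$host/dhchap_hash"      # digest for this round
echo ffdhe2048                 > "$host/dhchap_dhgroup"   # DH group for this round
echo "DHHC-1:00:MzI3MDc0...:"  > "$host/dhchap_key"       # host (transport) secret
echo "DHHC-1:02:ZTY2Mzc4...:"  > "$host/dhchap_ctrl_key"  # controller secret, when one exists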
00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.173 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.434 nvme0n1 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:18.434 19:24:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.434 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.695 nvme0n1 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.695 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.957 nvme0n1 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.957 19:24:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 nvme0n1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.219 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.480 nvme0n1 00:28:19.480 
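On the host side, every connect_authenticate round reduces to the RPC sequence visible in the surrounding entries: restrict the negotiable DH-HMAC-CHAP parameters, attach to the kernel target using the keyring names registered earlier, confirm that a controller named nvme0 appeared, and detach before the next combination. A hedged condensed rendering of the sha256/ffdhe3072/keyid-1 round that just completed, using only flags that appear in this trace (the default RPC socket is assumed):

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
[[ $(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach (and auth) must have succeeded
scripts/rpc.py bdev_nvme_detach_controller nvme0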
19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.480 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.481 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.741 nvme0n1 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
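For readability, the initiator-side sequence that connect_authenticate is tracing above reduces to the four RPCs sketched below. This is a minimal sketch assembled from the commands visible in this trace (the sha256 / ffdhe3072 / keyid-2 iteration), not a verbatim excerpt of host/auth.sh; rpc_cmd, the key names key2/ckey2, the host and subsystem NQNs and the 10.0.0.1:4420 listener are all taken from the log itself.

# Sketch of one connect_authenticate iteration (sha256 digest, ffdhe3072 DH group, keyid 2),
# assembled from the RPCs traced above.
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
# The attached controller shows up as nvme0; verify the name, then tear it down
# before the next (dhgroup, keyid) combination is tried.
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
rpc_cmd bdev_nvme_detach_controller nvme0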
00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.741 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.742 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 nvme0n1 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.002 
19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.002 19:24:25 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.002 19:24:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.263 nvme0n1 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:20.263 19:24:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.263 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.525 nvme0n1 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.525 19:24:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.525 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.787 nvme0n1 00:28:20.787 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.787 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.787 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.787 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.787 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.787 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.048 19:24:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.048 19:24:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.309 nvme0n1 00:28:21.309 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.309 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.309 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.309 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.309 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.309 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
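The same pattern repeats for every (dhgroup, keyid) combination; the driving loops are the ones traced at host/auth.sh@101-104. A condensed sketch of that structure follows; the dhgroups, keys and ckeys arrays and the two helper functions are defined earlier in host/auth.sh and are only assumed here, not reproduced from it.

# Condensed view of the loop this part of the trace is executing. The helper
# bodies are the ones traced above: nvmet_auth_set_key programs the target's
# DH-HMAC-CHAP key material, connect_authenticate attaches the initiator with
# the matching --dhchap-key/--dhchap-ctrlr-key, checks the controller name and detaches.
for dhgroup in "${dhgroups[@]}"; do        # ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192 in this section
        for keyid in "${!keys[@]}"; do     # keyids 0 through 4
                nvmet_auth_set_key   sha256 "$dhgroup" "$keyid"
                connect_authenticate sha256 "$dhgroup" "$keyid"
        done
done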
00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.310 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.571 nvme0n1 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.571 19:24:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.571 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.831 nvme0n1 00:28:21.831 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.831 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.831 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.831 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.831 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.831 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.092 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.092 19:24:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.092 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.092 19:24:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:22.092 19:24:28 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.092 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.368 nvme0n1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.636 
19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.636 19:24:28 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.636 19:24:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.208 nvme0n1 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.208 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.469 nvme0n1 00:28:23.469 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.469 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.469 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.469 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.469 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.729 
19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.729 19:24:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.299 nvme0n1 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.299 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.300 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.559 nvme0n1 00:28:24.559 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.559 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.559 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.559 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.559 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.559 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.819 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.820 19:24:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.390 nvme0n1 00:28:25.390 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.390 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.390 19:24:31 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.390 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.390 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.390 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.650 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.651 19:24:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.223 nvme0n1 00:28:26.223 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.223 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.223 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.223 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.223 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.223 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.483 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.484 19:24:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.055 nvme0n1 00:28:27.055 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.055 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.055 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.055 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.055 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.055 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.315 
19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
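
The xtrace above repeats one fixed pattern per (digest, dhgroup, keyid) combination: install the key on the target via nvmet_auth_set_key, then connect_authenticate restricts the host's DH-HMAC-CHAP options, resolves the initiator address, attaches a controller with the matching key, verifies it, and detaches. Below is a minimal bash sketch of that flow, reconstructed only from the commands visible in this log; rpc_cmd, nvmet_auth_set_key and the keys[]/ckeys[] arrays are the helpers and values the trace itself uses, and the loop scaffolding is assumed, not the literal auth.sh.

# Sketch of the per-iteration flow seen in this nvmf_auth_host trace.
# Assumes rpc_cmd and nvmet_auth_set_key exist as in the trace and that
# keys[]/ckeys[] hold the DHHC-1 secrets it prints.
digests=(sha256 sha384)                     # digests exercised in this slice
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)   # FFDHE groups cycled
hostnqn=nqn.2024-02.io.spdk:host0
subnqn=nqn.2024-02.io.spdk:cnode0

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # Target side: install key (and controller key, if any) for this combo.
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            # Host side: allow exactly this digest/dhgroup, then connect with keyN.
            rpc_cmd bdev_nvme_set_options \
                --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
            rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
                -a 10.0.0.1 -s 4420 -q "$hostnqn" -n "$subnqn" \
                --dhchap-key "key$keyid" \
                ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
            # Verify the authenticated controller came up, then tear it down.
            [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
            rpc_cmd bdev_nvme_detach_controller nvme0
        done
    done
done
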
00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.315 19:24:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.316 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.316 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.316 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.887 nvme0n1 00:28:27.887 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.887 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.887 19:24:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.887 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.887 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.887 19:24:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.148 
19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.148 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.149 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.721 nvme0n1 00:28:28.721 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.721 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.721 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.721 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.721 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.721 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:28.981 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.982 19:24:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.982 nvme0n1 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:28.982 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.242 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:29.242 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
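
The ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion traced just above is what makes bidirectional authentication optional: bash's ":+" operator only produces the --dhchap-ctrlr-key argument pair when a controller key exists for that keyid, which is why the keyid=4 attaches in this log carry no ckey4. A tiny standalone illustration follows; the secret values below are made up, only the expansion idiom is taken from the trace.

# Standalone illustration of the ":+" idiom used above (values are made up).
# When ckeys[keyid] is empty, the alternative text is dropped entirely, so no
# --dhchap-ctrlr-key flag reaches bdev_nvme_attach_controller for that keyid.
ckeys=("DHHC-1:00:example-ctrlr-secret:" "")
for keyid in 0 1; do
    args=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "keyid=$keyid -> ${args[*]:-<no controller key flag>}"
done
# keyid=0 -> --dhchap-ctrlr-key ckey0
# keyid=1 -> <no controller key flag>
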
00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.243 nvme0n1 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.243 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.503 nvme0n1 00:28:29.503 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.504 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.765 nvme0n1 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.765 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.027 nvme0n1 00:28:30.027 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.027 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.027 19:24:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.027 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.027 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.027 19:24:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.027 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
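
Every attach in this log is preceded by the same get_main_ns_ip lookup from nvmf/common.sh: the transport name selects which environment variable holds the address, and its value (10.0.0.1 on this rig) is echoed back. The reconstruction below is hedged; the candidate table and the emptiness checks are exactly what the xtrace prints, while TEST_TRANSPORT as the variable behind the "[[ -z tcp ]]" test, the ${!ip} indirection, and the early returns are assumptions.

# Reconstruction of the get_main_ns_ip checks traced above (nvmf/common.sh).
# TEST_TRANSPORT is assumed to carry the "tcp" seen in [[ -z tcp ]];
# the ${!ip} indirection that yields 10.0.0.1 is likewise an assumption.
get_main_ns_ip() {
    local ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP   # RDMA runs use the first target IP
        [tcp]=NVMF_INITIATOR_IP       # TCP runs (this log) use the initiator IP
    )
    [[ -z $TEST_TRANSPORT ]] && return 1                 # traced: [[ -z tcp ]]
    [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}                 # -> NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1                          # traced: [[ -z 10.0.0.1 ]]
    echo "${!ip}"                                        # -> 10.0.0.1
}
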
00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.028 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.289 nvme0n1 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
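
The nvmet_auth_set_key calls themselves only surface as a handful of echos in the xtrace (the hmac() digest string, the dhgroup, the key, and optionally the controller key) because bash does not trace redirections. The usual destination for such writes is the nvmet configfs host entry, which is an assumption here since no paths appear in the log; the sketch below is offered only under that assumption, with the digest/dhgroup/key strings taken from the trace.

# Sketch of what the traced echos in nvmet_auth_set_key likely feed.
# The /sys/kernel/config/nvmet/hosts/... attribute paths are an assumption
# (redirections are invisible in xtrace); keys[]/ckeys[] are the arrays of
# DHHC-1 secrets printed above.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. hmac(sha384)
    echo "$dhgroup"      > "$host/dhchap_dhgroup"   # e.g. ffdhe3072
    echo "$key"          > "$host/dhchap_key"       # DHHC-1:... from the log
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}
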
00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.289 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.550 nvme0n1 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.550 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.811 nvme0n1 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.811 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.812 19:24:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 nvme0n1 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.071 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.072 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.332 nvme0n1 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.332 19:24:37 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.332 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.593 nvme0n1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.593 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 nvme0n1 00:28:32.164 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.164 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.164 19:24:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.164 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.164 19:24:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.164 19:24:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.164 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.424 nvme0n1 00:28:32.424 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.424 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:32.425 19:24:38 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.425 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.685 nvme0n1 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.685 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:32.686 19:24:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.946 nvme0n1 00:28:32.946 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.946 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.946 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.946 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.946 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.946 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.206 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.207 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.466 nvme0n1 00:28:33.466 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:33.727 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.728 19:24:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.299 nvme0n1 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.299 19:24:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.299 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.300 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.560 nvme0n1 00:28:34.560 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.560 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.820 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.821 19:24:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.392 nvme0n1 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.392 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.652 nvme0n1 00:28:35.652 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.652 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.652 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.652 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.652 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
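The entries above repeat a single verification step per key index: the host bdev layer is restricted to the digest/dhgroup pair under test with bdev_nvme_set_options, a controller is attached with the matching --dhchap-key/--dhchap-ctrlr-key pair, the new controller name is checked through bdev_nvme_get_controllers piped to jq, and the controller is detached before the next combination. A condensed sketch of that per-iteration sequence, using only the RPC methods and flags visible in the trace (rpc_cmd is the test framework's wrapper around SPDK's rpc.py, and key names such as key2/ckey2 are assumed to have been registered earlier in the run):

# Sketch of one connect_authenticate iteration as traced above (sha384/ffdhe6144, keyid 2).
# Assumes a running SPDK host application and previously registered DH-HMAC-CHAP keys.
digest=sha384
dhgroup=ffdhe6144
keyid=2

# Offer only the digest/dhgroup pair under test during authentication.
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Attach to the target subsystem with the per-index key pair.
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

# Confirm the controller appeared, then detach so the next combination starts clean.
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
rpc_cmd bdev_nvme_detach_controller nvme0

As the ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"} expansion in the trace shows, the controller key argument is passed only when a ckey exists for that index; keyid 4 above authenticates with --dhchap-key alone.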
00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.913 19:24:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.484 nvme0n1 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.484 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.485 19:24:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.425 nvme0n1 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.425 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.426 19:24:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.369 nvme0n1 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.369 19:24:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 nvme0n1 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.941 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.202 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.202 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.202 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:39.202 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.203 19:24:45 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.203 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.775 nvme0n1 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.775 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.036 19:24:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.036 nvme0n1 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.036 19:24:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.036 19:24:46 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.037 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.298 nvme0n1 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.298 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.559 nvme0n1 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.559 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.560 19:24:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.560 19:24:46 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.560 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.821 nvme0n1 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.821 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.822 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.083 nvme0n1 00:28:41.083 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.083 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.083 19:24:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.083 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.083 19:24:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.083 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.345 nvme0n1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.345 
19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.345 19:24:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.345 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.606 nvme0n1 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
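The entries above complete one host-side pass of connect_authenticate and begin re-keying the target for the next key index. Condensed, the host side of each pass reduces to the RPC sequence below; this is a hedged sketch rather than the script itself, and it assumes SPDK's scripts/rpc.py is reachable as rpc.py and that the named keys (key1/ckey1 in this pass) were registered with the host earlier in the test.
digest=sha512; dhgroup=ffdhe3072; keyid=1
# Limit the host to the digest/DH-group pair under test.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# Attach with DH-HMAC-CHAP; the controller key is passed only when one exists for this key index.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
  --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# Attachment only succeeds if authentication completed; confirm the controller, then tear down for the next pass.
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc.py bdev_nvme_detach_controller nvme0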
00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.606 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.607 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.867 nvme0n1 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.867 19:24:47 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.867 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.868 19:24:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.128 nvme0n1 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.128 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.129 
19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.129 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.389 nvme0n1 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.389 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.390 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.651 nvme0n1 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.651 19:24:48 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.651 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.912 19:24:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.172 nvme0n1 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.172 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
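The nvmet_auth_set_key calls traced in this section only show the echo side of the key installation; the write destinations are redirected and do not appear in the xtrace output. As a rough sketch, the echoed values line up with the Linux kernel nvmet target's per-host DH-HMAC-CHAP attributes in configfs. The paths and host NQN below are assumptions, not taken from this log, and the key strings are placeholders.
host_nqn=nqn.2024-02.io.spdk:host0                 # assumed: matches the -q value used on the host side
cfg=/sys/kernel/config/nvmet/hosts/$host_nqn       # assumed configfs location for per-host auth settings
echo 'hmac(sha512)'      > "$cfg/dhchap_hash"      # digest selected for this pass
echo ffdhe4096           > "$cfg/dhchap_dhgroup"   # DH group selected for this pass
echo "DHHC-1:01:<key>"   > "$cfg/dhchap_key"       # host key (placeholder, not a real secret)
echo "DHHC-1:01:<ckey>"  > "$cfg/dhchap_ctrl_key"  # controller key, written only when bidirectional auth is tested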
00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.173 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.433 nvme0n1 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.433 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.434 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.695 nvme0n1 00:28:43.695 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.695 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.695 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.695 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.695 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.695 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.956 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.957 19:24:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.218 nvme0n1 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.218 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.219 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.791 nvme0n1 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
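Taken together, the repetition in this log comes from two nested loops in host/auth.sh: every DH group is exercised against every configured key index, re-keying the target before each connect attempt. A hedged reconstruction of that driver follows, with the array contents inferred from this excerpt (only the sha512 pass is visible here); keys[], ckeys[], and the two helper functions are defined earlier in the script.
digest=sha512
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen so far in this excerpt; later groups may follow
for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do                   # key indices 0..4 registered earlier in the test
        nvmet_auth_set_key   "$digest" "$dhgroup" "$keyid"   # install key material on the kernel target
        connect_authenticate "$digest" "$dhgroup" "$keyid"   # SPDK host: set options, attach, verify, detach
    done
done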
00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.791 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.792 19:24:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.363 nvme0n1 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.363 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.960 nvme0n1 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.960 19:24:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.531 nvme0n1 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.531 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.532 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.532 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.532 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.532 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.793 nvme0n1 00:28:46.793 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.793 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.793 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.793 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.793 19:24:52 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MjBmZjczMzhkMWE5YTg3Y2UyZTBhOGE5YTAwZmZlNTZgoz01: 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NGE5OTIzY2YyMDM5YWE3OGZkZmM1MmI0MDliNTAyNTFmMGNmZmJlZGViNmIzMjY0YWUzZWE1ZjZhODUzY2M5MLoBp4s=: 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.053 19:24:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.624 nvme0n1 00:28:47.624 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.624 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.624 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.624 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.624 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.883 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.883 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.883 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.883 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.883 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.884 19:24:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.453 nvme0n1 00:28:48.453 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.453 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.453 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.453 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.453 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.713 19:24:54 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:OTA5M2FlNjExODQzYmExZTZiMjk5MTJjNjA0ZTVlMDPaJL0F: 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGE1ZjBkYTFlY2IzMWRkNThmZDM1Y2RkNmM5NGU4NWET0eLS: 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.713 19:24:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.283 nvme0n1 00:28:49.283 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZThmZmVkMzZmNjgyYTM4M2VkNjM3NjczZDFmN2JmYmUyNjBmYjc4MmZmNGE5Njhjks+XoQ==: 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Mjk0ODAwMmE2ZTczYmZlMTJmMDk4OTQ1ZTA3ZmRhNTeS+ssY: 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:49.543 19:24:55 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.543 19:24:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.482 nvme0n1 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGVlZTQ4M2ZlOGIyNzdlZjk0YTkwNTAyMjc1YTkwYjFhN2MyZjQ2NzI0YTllNGMyMmVlNzlmYTAwZGFlYzc5ZuznigY=: 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:50.482 19:24:56 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 nvme0n1 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzI3MDc0YjQzZDcwMTc2Mzc2ZWRkNTM3MGNkZTcwMGFkODc0NjVkNWZjNWFjODZjVii88Q==: 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZTY2Mzc4ZGVjNWQ5MjkwYWNjZmNjNmJhY2EzZWU4ZGUwYzM0MThkMjEzOTk5MDU5BJk1GA==: 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.052 
19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.052 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.313 request: 00:28:51.313 { 00:28:51.313 "name": "nvme0", 00:28:51.313 "trtype": "tcp", 00:28:51.313 "traddr": "10.0.0.1", 00:28:51.313 "adrfam": "ipv4", 00:28:51.313 "trsvcid": "4420", 00:28:51.313 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:51.313 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:51.313 "prchk_reftag": false, 00:28:51.313 "prchk_guard": false, 00:28:51.313 "hdgst": false, 00:28:51.313 "ddgst": false, 00:28:51.313 "method": "bdev_nvme_attach_controller", 00:28:51.313 "req_id": 1 00:28:51.313 } 00:28:51.313 Got JSON-RPC error response 00:28:51.313 response: 00:28:51.313 { 00:28:51.313 "code": -5, 00:28:51.313 "message": "Input/output error" 00:28:51.313 } 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.313 request: 00:28:51.313 { 00:28:51.313 "name": "nvme0", 00:28:51.313 "trtype": "tcp", 00:28:51.313 "traddr": "10.0.0.1", 00:28:51.313 "adrfam": "ipv4", 00:28:51.313 "trsvcid": "4420", 00:28:51.313 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:51.313 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:51.313 "prchk_reftag": false, 00:28:51.313 "prchk_guard": false, 00:28:51.313 "hdgst": false, 00:28:51.313 "ddgst": false, 00:28:51.313 "dhchap_key": "key2", 00:28:51.313 "method": "bdev_nvme_attach_controller", 00:28:51.313 "req_id": 1 00:28:51.313 } 00:28:51.313 Got JSON-RPC error response 00:28:51.313 response: 00:28:51.313 { 00:28:51.313 "code": -5, 00:28:51.313 "message": "Input/output error" 00:28:51.313 } 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:51.313 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:51.313 19:24:57 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.314 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.575 request: 00:28:51.575 { 00:28:51.575 "name": "nvme0", 00:28:51.575 "trtype": "tcp", 00:28:51.575 "traddr": "10.0.0.1", 00:28:51.575 "adrfam": "ipv4", 
00:28:51.575 "trsvcid": "4420", 00:28:51.575 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:51.575 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:51.575 "prchk_reftag": false, 00:28:51.575 "prchk_guard": false, 00:28:51.575 "hdgst": false, 00:28:51.575 "ddgst": false, 00:28:51.575 "dhchap_key": "key1", 00:28:51.575 "dhchap_ctrlr_key": "ckey2", 00:28:51.575 "method": "bdev_nvme_attach_controller", 00:28:51.575 "req_id": 1 00:28:51.575 } 00:28:51.575 Got JSON-RPC error response 00:28:51.575 response: 00:28:51.575 { 00:28:51.575 "code": -5, 00:28:51.575 "message": "Input/output error" 00:28:51.575 } 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:51.575 rmmod nvme_tcp 00:28:51.575 rmmod nvme_fabrics 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1585260 ']' 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1585260 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1585260 ']' 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1585260 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1585260 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1585260' 00:28:51.575 killing process with pid 1585260 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1585260 00:28:51.575 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1585260 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.836 19:24:57 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:53.749 19:24:59 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.053 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.053 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.053 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.313 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:57.884 19:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.YKV /tmp/spdk.key-null.QrM /tmp/spdk.key-sha256.VO5 /tmp/spdk.key-sha384.63G /tmp/spdk.key-sha512.1H0 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:57.884 19:25:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:01.191 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:01.191 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:01.191 00:29:01.191 real 0m58.127s 00:29:01.191 user 0m52.240s 00:29:01.191 sys 0m14.707s 00:29:01.191 19:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.191 19:25:07 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.191 ************************************ 00:29:01.191 END TEST nvmf_auth_host 00:29:01.191 ************************************ 00:29:01.191 19:25:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:01.191 19:25:07 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:29:01.191 19:25:07 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:01.191 19:25:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:01.191 19:25:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.191 19:25:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.451 ************************************ 00:29:01.452 START TEST nvmf_digest 00:29:01.452 ************************************ 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:01.452 * Looking for test storage... 
00:29:01.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.452 19:25:07 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:01.452 19:25:07 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:09.596 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:09.596 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:09.596 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:09.596 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.596 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:09.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.533 ms 00:29:09.596 00:29:09.596 --- 10.0.0.2 ping statistics --- 00:29:09.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.597 rtt min/avg/max/mdev = 0.533/0.533/0.533/0.000 ms 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.343 ms 00:29:09.597 00:29:09.597 --- 10.0.0.1 ping statistics --- 00:29:09.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.597 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:09.597 ************************************ 00:29:09.597 START TEST nvmf_digest_clean 00:29:09.597 ************************************ 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1601874 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1601874 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1601874 ']' 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.597 
19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.597 19:25:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.597 [2024-07-12 19:25:14.636469] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:09.597 [2024-07-12 19:25:14.636529] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:09.597 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.597 [2024-07-12 19:25:14.706513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.597 [2024-07-12 19:25:14.779540] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:09.597 [2024-07-12 19:25:14.779576] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:09.597 [2024-07-12 19:25:14.779584] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:09.597 [2024-07-12 19:25:14.779590] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:09.597 [2024-07-12 19:25:14.779595] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:09.597 [2024-07-12 19:25:14.779623] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.597 null0 00:29:09.597 [2024-07-12 19:25:15.518771] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:09.597 [2024-07-12 19:25:15.542952] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1602143 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1602143 /var/tmp/bperf.sock 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1602143 ']' 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:09.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.597 19:25:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.597 [2024-07-12 19:25:15.598513] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:09.597 [2024-07-12 19:25:15.598560] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602143 ] 00:29:09.597 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.597 [2024-07-12 19:25:15.673831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.858 [2024-07-12 19:25:15.738001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:10.429 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.429 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:10.429 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:10.429 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:10.429 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:10.690 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.690 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.951 nvme0n1 00:29:10.951 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:10.951 19:25:16 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.951 Running I/O for 2 seconds... 
00:29:12.866 00:29:12.866 Latency(us) 00:29:12.866 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.866 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:12.866 nvme0n1 : 2.01 17758.05 69.37 0.00 0.00 7200.49 3522.56 19333.12 00:29:12.866 =================================================================================================================== 00:29:12.866 Total : 17758.05 69.37 0.00 0.00 7200.49 3522.56 19333.12 00:29:12.866 0 00:29:12.866 19:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:12.866 19:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:12.866 19:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:12.866 19:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.866 19:25:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:12.866 | select(.opcode=="crc32c") 00:29:12.866 | "\(.module_name) \(.executed)"' 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1602143 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1602143 ']' 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1602143 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602143 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602143' 00:29:13.127 killing process with pid 1602143 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1602143 00:29:13.127 Received shutdown signal, test time was about 2.000000 seconds 00:29:13.127 00:29:13.127 Latency(us) 00:29:13.127 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:13.127 =================================================================================================================== 00:29:13.127 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:13.127 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1602143 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:13.388 19:25:19 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1602907 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1602907 /var/tmp/bperf.sock 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1602907 ']' 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:13.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:13.388 19:25:19 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:13.388 [2024-07-12 19:25:19.360594] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:13.388 [2024-07-12 19:25:19.360648] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1602907 ] 00:29:13.388 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:13.388 Zero copy mechanism will not be used. 
00:29:13.388 EAL: No free 2048 kB hugepages reported on node 1 00:29:13.388 [2024-07-12 19:25:19.434839] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:13.388 [2024-07-12 19:25:19.488081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.330 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:14.592 nvme0n1 00:29:14.592 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:14.592 19:25:20 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:14.592 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.592 Zero copy mechanism will not be used. 00:29:14.592 Running I/O for 2 seconds... 
00:29:17.139 00:29:17.139 Latency(us) 00:29:17.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.139 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:17.139 nvme0n1 : 2.01 2731.05 341.38 0.00 0.00 5855.75 1037.65 15619.41 00:29:17.139 =================================================================================================================== 00:29:17.139 Total : 2731.05 341.38 0.00 0.00 5855.75 1037.65 15619.41 00:29:17.139 0 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:17.139 | select(.opcode=="crc32c") 00:29:17.139 | "\(.module_name) \(.executed)"' 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1602907 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1602907 ']' 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1602907 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1602907 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1602907' 00:29:17.139 killing process with pid 1602907 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1602907 00:29:17.139 Received shutdown signal, test time was about 2.000000 seconds 00:29:17.139 00:29:17.139 Latency(us) 00:29:17.139 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.139 =================================================================================================================== 00:29:17.139 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:17.139 19:25:22 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1602907 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:17.139 19:25:23 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1603587 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1603587 /var/tmp/bperf.sock 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1603587 ']' 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:17.139 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.139 [2024-07-12 19:25:23.067540] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:29:17.139 [2024-07-12 19:25:23.067594] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1603587 ] 00:29:17.139 EAL: No free 2048 kB hugepages reported on node 1 00:29:17.139 [2024-07-12 19:25:23.141449] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.139 [2024-07-12 19:25:23.193677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.711 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:17.711 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:17.711 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:17.711 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:17.711 19:25:23 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:17.972 19:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.972 19:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.544 nvme0n1 00:29:18.544 19:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:18.544 19:25:24 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.544 Running I/O for 2 seconds... 
00:29:20.456 00:29:20.456 Latency(us) 00:29:20.456 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.456 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:20.456 nvme0n1 : 2.01 21341.95 83.37 0.00 0.00 5986.05 5488.64 12670.29 00:29:20.456 =================================================================================================================== 00:29:20.456 Total : 21341.95 83.37 0.00 0.00 5986.05 5488.64 12670.29 00:29:20.456 0 00:29:20.456 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:20.456 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:20.456 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:20.456 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:20.456 | select(.opcode=="crc32c") 00:29:20.456 | "\(.module_name) \(.executed)"' 00:29:20.456 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1603587 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1603587 ']' 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1603587 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1603587 00:29:20.717 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1603587' 00:29:20.718 killing process with pid 1603587 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1603587 00:29:20.718 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.718 00:29:20.718 Latency(us) 00:29:20.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.718 =================================================================================================================== 00:29:20.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1603587 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:20.718 19:25:26 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1604277 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1604277 /var/tmp/bperf.sock 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1604277 ']' 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:20.718 19:25:26 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.979 [2024-07-12 19:25:26.873144] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:20.979 [2024-07-12 19:25:26.873198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1604277 ] 00:29:20.979 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.979 Zero copy mechanism will not be used. 
00:29:20.979 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.979 [2024-07-12 19:25:26.948250] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.979 [2024-07-12 19:25:27.001377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.616 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:21.616 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:21.616 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.616 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.616 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.877 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.877 19:25:27 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.138 nvme0n1 00:29:22.138 19:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:22.138 19:25:28 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.138 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.138 Zero copy mechanism will not be used. 00:29:22.138 Running I/O for 2 seconds... 
00:29:24.683 00:29:24.683 Latency(us) 00:29:24.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.683 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:24.683 nvme0n1 : 2.01 3951.09 493.89 0.00 0.00 4041.72 2034.35 20534.61 00:29:24.683 =================================================================================================================== 00:29:24.683 Total : 3951.09 493.89 0.00 0.00 4041.72 2034.35 20534.61 00:29:24.683 0 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:24.683 | select(.opcode=="crc32c") 00:29:24.683 | "\(.module_name) \(.executed)"' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1604277 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1604277 ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1604277 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1604277 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1604277' 00:29:24.683 killing process with pid 1604277 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1604277 00:29:24.683 Received shutdown signal, test time was about 2.000000 seconds 00:29:24.683 00:29:24.683 Latency(us) 00:29:24.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:24.683 =================================================================================================================== 00:29:24.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1604277 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1601874 00:29:24.683 19:25:30 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1601874 ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1601874 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1601874 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1601874' 00:29:24.683 killing process with pid 1601874 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1601874 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1601874 00:29:24.683 00:29:24.683 real 0m16.183s 00:29:24.683 user 0m31.839s 00:29:24.683 sys 0m3.213s 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.683 ************************************ 00:29:24.683 END TEST nvmf_digest_clean 00:29:24.683 ************************************ 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:24.683 19:25:30 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:24.944 ************************************ 00:29:24.944 START TEST nvmf_digest_error 00:29:24.944 ************************************ 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1605001 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1605001 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1605001 ']' 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:24.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:24.944 19:25:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:24.944 [2024-07-12 19:25:30.894231] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:24.944 [2024-07-12 19:25:30.894284] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:24.944 EAL: No free 2048 kB hugepages reported on node 1 00:29:24.944 [2024-07-12 19:25:30.960516] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.944 [2024-07-12 19:25:31.026117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.944 [2024-07-12 19:25:31.026158] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:24.944 [2024-07-12 19:25:31.026165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.944 [2024-07-12 19:25:31.026172] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.944 [2024-07-12 19:25:31.026181] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
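The nvmf_digest_error test that begins here leans on SPDK's error accel module: the target is started with --wait-for-rpc so crc32c can be re-routed before initialization completes, and corruption is only switched on once the host controller is attached; the bdevperf side then mirrors the digest_clean sketch above (attach with --ddgst, then perform_tests). A rough sketch of the target-side sequence follows, assuming (as in SPDK's autotest helpers) that rpc_cmd talks to the default /var/tmp/spdk.sock, and that framework_start_init is what resumes the paused app (that exact call is not echoed in this excerpt); the -o/-t/-i arguments are taken verbatim from the RPCs shown further down in this log.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Route every crc32c operation through the "error" accel module (digest.sh@104).
$SPDK/scripts/rpc.py accel_assign_opc -o crc32c -m error
# Resume initialization; the null0 bdev and TCP listener config follow (assumed placement).
$SPDK/scripts/rpc.py framework_start_init
# Keep injection disabled while bdevperf attaches with --ddgst (digest.sh@63-@64) ...
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
# ... then switch to corrupt mode (digest.sh@67); the host-side reads that follow
# start failing their data digest check, which is what the error entries below show.
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256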
00:29:24.944 [2024-07-12 19:25:31.026209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.884 [2024-07-12 19:25:31.712179] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.884 null0 00:29:25.884 [2024-07-12 19:25:31.792695] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.884 [2024-07-12 19:25:31.816899] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1605339 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1605339 /var/tmp/bperf.sock 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1605339 ']' 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:25.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:25.884 19:25:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:25.884 [2024-07-12 19:25:31.873142] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:25.884 [2024-07-12 19:25:31.873190] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1605339 ] 00:29:25.884 EAL: No free 2048 kB hugepages reported on node 1 00:29:25.884 [2024-07-12 19:25:31.947795] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.884 [2024-07-12 19:25:32.001239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.505 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:26.505 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:26.505 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:26.505 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:26.764 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:26.764 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:26.764 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:26.764 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:26.764 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:26.764 19:25:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.023 nvme0n1 00:29:27.023 19:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:27.023 19:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:27.023 19:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:27.023 19:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:27.284 19:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:27.284 19:25:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:27.284 Running I/O for 2 seconds... 00:29:27.284 [2024-07-12 19:25:33.254076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.254106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.254115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.267655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.267676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.267684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.280624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.280645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.280652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.292706] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.292730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.292737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.305879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.305898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:6372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.305904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.317203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.317221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:2801 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.317228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.328848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.328866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1097 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.328872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.341035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.341053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.341060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.353994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.354012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.354018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.365892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.365909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.365916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.378173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.378190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:24514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.378196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.390709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.390726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.390733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.284 [2024-07-12 19:25:33.404047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.284 [2024-07-12 19:25:33.404065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:11826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.284 [2024-07-12 19:25:33.404071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.416676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.416694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:1471 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.416701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.428065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.428082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.428088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.441261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.441279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.441285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.453076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.453094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.453100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.464974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.464991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21718 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.464997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.477087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.477104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:2313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.477110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.489192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.489210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.489216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.502011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.502028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.502038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.514536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.514553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.514559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.527148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.527165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:10315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.527171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.537429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.537446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.537453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.550894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.550912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.550918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.562056] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.562073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.562080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.575264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.575281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:11644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.575288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.587463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 
00:29:27.545 [2024-07-12 19:25:33.587481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:22983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.587487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.599957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.599975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.599981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.611042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.611059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:9204 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.611066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.624198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.624216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.624222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.637684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.637702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.637709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.649224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.649241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.649247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.659762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.659779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.545 [2024-07-12 19:25:33.659785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.545 [2024-07-12 19:25:33.673008] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.545 [2024-07-12 19:25:33.673026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.546 [2024-07-12 19:25:33.673032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.685957] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.685975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.685981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.697793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.697811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.697817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.710251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.710268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:52 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.710278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.722850] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.722867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:25105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.722873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.734654] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.734671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.734678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.746646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.746663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.746670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:27.807 [2024-07-12 19:25:33.758242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.758259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.758266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.772076] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.772094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.772100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.783806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.783823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18568 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.783830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.796547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.796564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.796571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.808858] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.808876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.808882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.821595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.821615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.821622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.833981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.833999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.834005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.845705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.845722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.845728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.858250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.858268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.858274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.870832] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.870850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.870856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.883034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.883052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.883059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.894047] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.894065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.894071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.906887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.906904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.906911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.919115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.919134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.919141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:27.807 [2024-07-12 19:25:33.931633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:27.807 [2024-07-12 19:25:33.931650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:27.807 [2024-07-12 19:25:33.931656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:33.945421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:33.945438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:33.945444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:33.957051] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:33.957068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:33.957074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:33.969441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:33.969458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:33.969464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:33.979937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:33.979953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:12645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:33.979960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:33.993222] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:33.993239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:8425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:33.993246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.005928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.005944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.068 [2024-07-12 19:25:34.005950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.017641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.017657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.017664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.029972] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.029989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:92 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.029998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.042724] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.042741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.042748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.055137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.055154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.055160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.066546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.066563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:13352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.066570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.078077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.078094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.078101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.092117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.092137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 
lba:23452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.092143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.102754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.102771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.102777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.115622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.115639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.115645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.127369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.127386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.127392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.068 [2024-07-12 19:25:34.139537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.068 [2024-07-12 19:25:34.139557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:19071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.068 [2024-07-12 19:25:34.139563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.069 [2024-07-12 19:25:34.152249] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.069 [2024-07-12 19:25:34.152267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:8098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.069 [2024-07-12 19:25:34.152273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.069 [2024-07-12 19:25:34.164414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.069 [2024-07-12 19:25:34.164431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:2396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.069 [2024-07-12 19:25:34.164438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.069 [2024-07-12 19:25:34.176381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.069 [2024-07-12 19:25:34.176397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.069 [2024-07-12 19:25:34.176404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.069 [2024-07-12 19:25:34.188718] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.069 [2024-07-12 19:25:34.188735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:2959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.069 [2024-07-12 19:25:34.188741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.330 [2024-07-12 19:25:34.200920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.330 [2024-07-12 19:25:34.200937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.330 [2024-07-12 19:25:34.200943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.330 [2024-07-12 19:25:34.214447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.214464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.214470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.227050] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.227067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:15371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.227073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.239074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.239090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.239099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.251307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.251324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:1903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.251331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.261692] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 
00:29:28.331 [2024-07-12 19:25:34.261709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20047 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.261715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.274961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.274978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.274985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.287977] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.287994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.288001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.299910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.299927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:20683 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.299934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.312597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.312615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.312623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.324753] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.324770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.324776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.337188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.337205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.337212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.348903] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.348925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.348932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.360292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.360309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.360316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.373442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.373460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16114 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.373466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.385885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.385904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.385910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.397616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.397634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:5976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.397641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.410793] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.410811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.410817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.422501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.422519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.422525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.434587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.434605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.434612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.447090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.447107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:9477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.447115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.331 [2024-07-12 19:25:34.459611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.331 [2024-07-12 19:25:34.459629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:17826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.331 [2024-07-12 19:25:34.459635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.472240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.472258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.472264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.483727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.483744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:1875 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.483750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.496087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.496104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.496111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.508806] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.508824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.508830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.519960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.519977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.519984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.533300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.533318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:13608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.533325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.545002] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.545019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:24229 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.545026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.557461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.557479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:24732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.557489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.569340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.569358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.569364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.581553] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.581570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.581577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.593277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.593295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.593301] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.606743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.592 [2024-07-12 19:25:34.606760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21903 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.592 [2024-07-12 19:25:34.606767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.592 [2024-07-12 19:25:34.619560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.619578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.619584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.631172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.631189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.631196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.644233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.644250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.644256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.654759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.654777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:9348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.654783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.667275] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.667296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.667303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.681043] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.681061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:28.593 [2024-07-12 19:25:34.681067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.692405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.692423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.692429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.704709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.704727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.704733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.593 [2024-07-12 19:25:34.716369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.593 [2024-07-12 19:25:34.716386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:24724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.593 [2024-07-12 19:25:34.716393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.728084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.728101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.728107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.740223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.740240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.740246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.752421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.752438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.752445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.766316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.766335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 
lba:8558 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.766345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.778250] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.778268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.778274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.789451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.789468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.789475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.801670] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.801688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.801694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.814055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.814073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.814079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.826902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.826920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18932 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.826926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.838909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.838927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.838933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.851120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.851142] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.851149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.863169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.863186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:6941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.863193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.875784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.875805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:19953 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.875812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.887539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.887556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:1848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.887563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.900330] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.900348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.900354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.912084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.912102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.854 [2024-07-12 19:25:34.912109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.854 [2024-07-12 19:25:34.924657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.854 [2024-07-12 19:25:34.924674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:21785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.855 [2024-07-12 19:25:34.924680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.855 [2024-07-12 19:25:34.936318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 
00:29:28.855 [2024-07-12 19:25:34.936337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.855 [2024-07-12 19:25:34.936343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.855 [2024-07-12 19:25:34.948995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.855 [2024-07-12 19:25:34.949013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.855 [2024-07-12 19:25:34.949020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.855 [2024-07-12 19:25:34.960707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.855 [2024-07-12 19:25:34.960725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.855 [2024-07-12 19:25:34.960731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.855 [2024-07-12 19:25:34.972556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:28.855 [2024-07-12 19:25:34.972574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.855 [2024-07-12 19:25:34.972581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:34.985412] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:34.985430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:24645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:34.985437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:34.997448] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:34.997465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:34.997472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.009469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.009487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.009493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.021542] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.021559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.021566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.034734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.034751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3588 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.034757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.046332] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.046349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.046356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.058057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.058075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:2214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.058082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.069766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.069784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.069790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.082513] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.082531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.082540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.094824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.094841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:20389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.094847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.107481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.107498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:2663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.107505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.119013] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.119030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:25468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.119036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.132042] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.132059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:1969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.132066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.144302] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.144320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.144327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.156569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.156586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:13274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.156593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.168246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.168263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.168270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.181574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.181591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.181597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.192274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.192294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.192301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.204370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.204387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.204393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.216884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.216902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:25042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.216908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 [2024-07-12 19:25:35.228436] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2054920) 00:29:29.116 [2024-07-12 19:25:35.228453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.116 [2024-07-12 19:25:35.228460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.116 00:29:29.116 Latency(us) 00:29:29.116 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.116 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:29.116 nvme0n1 : 2.00 20741.00 81.02 0.00 0.00 6164.86 3549.87 18568.53 00:29:29.116 =================================================================================================================== 00:29:29.116 Total : 20741.00 81.02 0.00 0.00 6164.86 3549.87 18568.53 00:29:29.116 0 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:29.377 | .driver_specific 00:29:29.377 | .nvme_error 00:29:29.377 | .status_code 00:29:29.377 | .command_transient_transport_error' 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1605339 00:29:29.377 19:25:35 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1605339 ']' 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1605339 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605339 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605339' 00:29:29.377 killing process with pid 1605339 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1605339 00:29:29.377 Received shutdown signal, test time was about 2.000000 seconds 00:29:29.377 00:29:29.377 Latency(us) 00:29:29.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.377 =================================================================================================================== 00:29:29.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:29.377 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1605339 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1606024 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1606024 /var/tmp/bperf.sock 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1606024 ']' 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:29.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
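The jq filter traced a few entries above is how the test turns the raw iostat JSON into its pass/fail number: it pulls the command_transient_transport_error counter out of the nvme_error status-code statistics for nvme0n1 and asserts it is non-zero (162 for the run that just completed). A minimal standalone version of that check, assuming the same bdevperf RPC socket and the SPDK tree's scripts/rpc.py, might look like:

    sock=/var/tmp/bperf.sock
    # The per-status-code NVMe error counters are only populated because
    # bdev_nvme_set_options was called with --nvme-error-stat for this bdev.
    errs=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b nvme0n1 |
             jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # Every injected data digest error surfaces as a TRANSIENT TRANSPORT ERROR (00/22)
    # completion, so a passing run must report a count greater than zero.
    (( errs > 0 )) || echo "no transient transport errors recorded"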
00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:29.638 19:25:35 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:29.638 [2024-07-12 19:25:35.639078] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:29.638 [2024-07-12 19:25:35.639139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606024 ] 00:29:29.638 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:29.638 Zero copy mechanism will not be used. 00:29:29.638 EAL: No free 2048 kB hugepages reported on node 1 00:29:29.638 [2024-07-12 19:25:35.711810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:29.638 [2024-07-12 19:25:35.765434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.577 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:30.838 nvme0n1 00:29:30.838 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:30.838 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:30.838 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.838 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:30.838 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:30.838 19:25:36 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:31.098 I/O size of 131072 is greater than zero copy threshold (65536). 
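The xtrace above records how the second error-injection pass (randread, 128 KiB I/O, queue depth 16) is wired up before perform_tests is issued. A condensed sketch of that sequence, reusing the socket, address and NQN shown in the trace (the socket used by rpc_cmd for the accel calls is not expanded in the log, so the default-socket form below is an assumption), could be:

    rpc=scripts/rpc.py
    bperf=/var/tmp/bperf.sock
    # Track errors per NVMe status code and retry failed I/O instead of failing the bdev.
    $rpc -s $bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Start with crc32c error injection disabled.
    $rpc accel_error_inject_error -o crc32c -t disable
    # Attach the TCP controller with data digest enabled so received data is CRC-checked.
    $rpc -s $bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
         -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Enable crc32c corruption with the same -i 32 argument used by the test.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
    examples/bdev/bdevperf/bdevperf.py -s $bperf perform_tests

With data digest enabled and crc32c results corrupted through the accel error-injection module, each affected READ completes with the TRANSIENT TRANSPORT ERROR (00/22) status seen throughout this log, is retried rather than failed outright (the -1 bdev retry count), and increments the counter that the jq check above reads back.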
00:29:31.098 Zero copy mechanism will not be used. 00:29:31.098 Running I/O for 2 seconds... 00:29:31.098 [2024-07-12 19:25:36.997358] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.098 [2024-07-12 19:25:36.997390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.098 [2024-07-12 19:25:36.997399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.098 [2024-07-12 19:25:37.009468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.098 [2024-07-12 19:25:37.009489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.098 [2024-07-12 19:25:37.009497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.098 [2024-07-12 19:25:37.019974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.098 [2024-07-12 19:25:37.019993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.020000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.033090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.033109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.033116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.044540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.044558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.044565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.057032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.057050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.057061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.070014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.070032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.070039] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.082166] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.082184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.082190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.094795] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.094813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.094819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.107066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.107085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.107091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.118978] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.118996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.119002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.130196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.130215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.130221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.142089] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.142106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.142113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.153402] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.153420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:31.099 [2024-07-12 19:25:37.153426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.164686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.164708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.164715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.176415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.176433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.176439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.189907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.189925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.189931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.204125] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.204142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.204148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.099 [2024-07-12 19:25:37.215439] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.099 [2024-07-12 19:25:37.215456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.099 [2024-07-12 19:25:37.215462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.229458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.229476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.229482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.242372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.242389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.242395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.253390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.253408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.253414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.264872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.264890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.264896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.277958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.277975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.277982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.290499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.290516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.290523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.303589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.303606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.303612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.316311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.316328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.316335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.327788] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.327806] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.360 [2024-07-12 19:25:37.327812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.360 [2024-07-12 19:25:37.340434] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.360 [2024-07-12 19:25:37.340450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.340457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.352603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.352620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.352627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.364759] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.364777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.364783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.377704] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.377722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.377731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.390326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.390344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.390350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.402674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.402691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.402698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.415896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 
00:29:31.361 [2024-07-12 19:25:37.415914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.415920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.428923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.428940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.428947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.442021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.442038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.442044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.454657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.454675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.454681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.467744] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.467762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.467768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.361 [2024-07-12 19:25:37.480611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.361 [2024-07-12 19:25:37.480629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.361 [2024-07-12 19:25:37.480635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.491953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.491971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.491977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.505401] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.505419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.505425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.516246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.516264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.516270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.529261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.529278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.529284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.539992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.540010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.540016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.552493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.552511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.552517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.623 [2024-07-12 19:25:37.563738] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.623 [2024-07-12 19:25:37.563755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.623 [2024-07-12 19:25:37.563762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.576869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.576887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.576893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:31.624 [2024-07-12 19:25:37.589191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.589208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.589218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.600516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.600533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.600539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.613661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.613679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.613685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.625465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.625483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.625490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.637261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.637279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.637285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.649310] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.649327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.649334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.662802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.662820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.662826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.676720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.676738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.676745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.691818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.691835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.691841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.706493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.706514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.706520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.721119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.721141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.721147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.735711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.735729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.735735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.624 [2024-07-12 19:25:37.751435] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.624 [2024-07-12 19:25:37.751452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.624 [2024-07-12 19:25:37.751458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.766052] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.766070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.766076] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.780580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.780598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.780604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.795671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.795689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.795695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.808115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.808137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.808144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.817685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.817704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.817710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.830733] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.830752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.830758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.843763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.885 [2024-07-12 19:25:37.843781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.885 [2024-07-12 19:25:37.843788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.885 [2024-07-12 19:25:37.858011] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.858029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:31.886 [2024-07-12 19:25:37.858036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.871769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.871788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.871794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.886549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.886568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.886574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.897159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.897176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.897182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.908841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.908859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.908866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.923316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.923335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.923341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.934982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.935001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.935010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.949063] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.949082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.949088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.962283] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.962302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.962309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.975162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.975180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.975186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.988642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.988660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.988667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:37.999097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:37.999115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:37.999127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.886 [2024-07-12 19:25:38.011198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:31.886 [2024-07-12 19:25:38.011216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:31.886 [2024-07-12 19:25:38.011223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.147 [2024-07-12 19:25:38.019946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.019964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.019970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.031091] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.031109] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.031115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.041578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.041600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.041606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.051891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.051910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.051916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.061710] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.061729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.061735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.073215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.073233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.073240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.086910] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.086929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.086935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.097902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.097921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.097927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.107807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.107826] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.107833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.118708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.118727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.118733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.130526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.130545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.130552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.141884] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.141902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.141908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.154554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.154573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.154579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.166642] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.166661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.166667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.179679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.179699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.179705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.190897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 
00:29:32.148 [2024-07-12 19:25:38.190915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.190922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.203442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.203461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.203467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.216844] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.216863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.216869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.227758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.227777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.227783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.239499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.239522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.239528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.253238] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.253257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.253263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.148 [2024-07-12 19:25:38.266200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.148 [2024-07-12 19:25:38.266219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.148 [2024-07-12 19:25:38.266225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.281188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.281206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.281212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.295420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.295439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.295445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.310107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.310130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.310136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.321616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.321635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.321641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.334016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.334035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.334041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.346253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.346272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.346279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.359204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.359223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.359229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.370389] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.370408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.370414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.381403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.381423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.381429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.394360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.394379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.394385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.409159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.409178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.409185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.424090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.424109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.424115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.438659] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.438678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.438685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.452202] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.452221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.452227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
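Every entry in this stretch of the log follows the same pattern: the NVMe/TCP initiator reports a data digest (CRC32) mismatch on received READ data (nvme_tcp_accel_seq_recv_compute_crc32_done: "data digest error"), prints the affected command, and completes it as COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, meaning Do Not Retry is clear and the completion is retryable. Below is a minimal Python sketch for tallying these entries from a saved console log; it is illustrative only, assumes nothing beyond the line format visible above, and its script name and field choices are assumptions rather than part of SPDK or this test.

#!/usr/bin/env python3
"""Tally the repeated NVMe/TCP data-digest-error entries in an SPDK console log.

Illustrative helper, not part of the SPDK test suite. It only assumes the
entry format visible above:
  ... nvme_tcp.c:...: *ERROR*: data digest error on tqpair=(0x...)
  ... nvme_qpair.c: ...: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22)
      qid:N cid:N ... dnr:D
"""
import re
import sys
from collections import Counter

DIGEST_RE = re.compile(r"data digest error on tqpair=\((0x[0-9a-fA-F]+)\)")
COMPLETION_RE = re.compile(
    r"COMMAND TRANSIENT TRANSPORT ERROR \(00/22\) qid:(\d+) cid:(\d+).*?dnr:(\d)"
)

def summarize(stream):
    digest_errors = Counter()   # digest errors seen per qpair address
    completions = Counter()     # transient-transport-error completions per (qid, cid)
    non_retryable = 0           # completions with dnr:1 (none expected in this log)
    for line in stream:
        for tqpair in DIGEST_RE.findall(line):
            digest_errors[tqpair] += 1
        for qid, cid, dnr in COMPLETION_RE.findall(line):
            completions[(int(qid), int(cid))] += 1
            if dnr == "1":
                non_retryable += 1
    return digest_errors, completions, non_retryable

if __name__ == "__main__":
    errors, comps, dnr1 = summarize(sys.stdin)
    print("digest errors per tqpair:", dict(errors))
    print("transient transport error completions per (qid, cid):", dict(comps))
    print("non-retryable (dnr:1) completions:", dnr1)

Run as, for example, "python3 tally_digest_errors.py < console.log"; if the run behaves as this log suggests, every reported digest error should pair with a retryable (dnr:0) transient transport error and the dnr:1 count should stay at zero.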
00:29:32.410 [2024-07-12 19:25:38.468277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.410 [2024-07-12 19:25:38.468296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.410 [2024-07-12 19:25:38.468305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.410 [2024-07-12 19:25:38.483293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.411 [2024-07-12 19:25:38.483312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.411 [2024-07-12 19:25:38.483319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.411 [2024-07-12 19:25:38.496641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.411 [2024-07-12 19:25:38.496660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.411 [2024-07-12 19:25:38.496666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.411 [2024-07-12 19:25:38.510184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.411 [2024-07-12 19:25:38.510203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.411 [2024-07-12 19:25:38.510210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.411 [2024-07-12 19:25:38.520582] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.411 [2024-07-12 19:25:38.520602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.411 [2024-07-12 19:25:38.520608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.411 [2024-07-12 19:25:38.532298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.411 [2024-07-12 19:25:38.532317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.411 [2024-07-12 19:25:38.532324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.673 [2024-07-12 19:25:38.543800] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.673 [2024-07-12 19:25:38.543820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.673 [2024-07-12 19:25:38.543826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.673 [2024-07-12 19:25:38.556300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.673 [2024-07-12 19:25:38.556319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.673 [2024-07-12 19:25:38.556326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.673 [2024-07-12 19:25:38.568458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.673 [2024-07-12 19:25:38.568477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.673 [2024-07-12 19:25:38.568483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.673 [2024-07-12 19:25:38.578510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.673 [2024-07-12 19:25:38.578532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.673 [2024-07-12 19:25:38.578539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.673 [2024-07-12 19:25:38.590683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.673 [2024-07-12 19:25:38.590702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.590708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.603810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.603828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.603835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.616298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.616317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.616323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.627697] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.627715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.627722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.639995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.640013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.640020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.650307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.650325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.650331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.662902] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.662921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.662927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.674539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.674557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.674563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.686451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.686470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.686476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.697784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.697802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.697808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.710723] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.710741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.710747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.722144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.722162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.722168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.735344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.735363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.735369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.749176] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.749194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.749200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.763328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.763345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.763352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.775904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.775922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.775929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.788457] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.788476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 [2024-07-12 19:25:38.788485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.674 [2024-07-12 19:25:38.800640] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.674 [2024-07-12 19:25:38.800659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.674 
[2024-07-12 19:25:38.800665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.813008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.813026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.813032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.824995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.825014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.825020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.836542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.836560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.836566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.850613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.850632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.850638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.860255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.860273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.860279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.871550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.871569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.871575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.884158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.884177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.884183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.896940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.896961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.896967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.908860] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.908879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.908885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.921547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.921565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.921571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.933573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.933592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.933598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.946362] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.946380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.946386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.956987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.957005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.957012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.971111] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.971134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.971140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.936 [2024-07-12 19:25:38.982907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd72b80) 00:29:32.936 [2024-07-12 19:25:38.982925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.936 [2024-07-12 19:25:38.982931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.936 00:29:32.936 Latency(us) 00:29:32.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.936 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:32.936 nvme0n1 : 2.05 2434.80 304.35 0.00 0.00 6444.65 1140.05 48496.64 00:29:32.936 =================================================================================================================== 00:29:32.936 Total : 2434.80 304.35 0.00 0.00 6444.65 1140.05 48496.64 00:29:32.936 0 00:29:32.936 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.936 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.936 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.936 | .driver_specific 00:29:32.936 | .nvme_error 00:29:32.936 | .status_code 00:29:32.936 | .command_transient_transport_error' 00:29:32.936 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1606024 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1606024 ']' 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1606024 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606024 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606024' 00:29:33.198 killing process with pid 1606024 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1606024 00:29:33.198 Received shutdown signal, test time was about 2.000000 seconds 00:29:33.198 00:29:33.198 Latency(us) 00:29:33.198 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.198 
=================================================================================================================== 00:29:33.198 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:33.198 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1606024 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1606709 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1606709 /var/tmp/bperf.sock 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1606709 ']' 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:33.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.459 [2024-07-12 19:25:39.404908] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:29:33.459 [2024-07-12 19:25:39.404950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1606709 ] 00:29:33.459 EAL: No free 2048 kB hugepages reported on node 1 00:29:33.459 [2024-07-12 19:25:39.449296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.459 [2024-07-12 19:25:39.502315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:33.459 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:33.720 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:33.720 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.720 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:33.720 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:33.720 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:33.720 19:25:39 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:33.981 nvme0n1 00:29:33.981 19:25:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:33.981 19:25:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:33.981 19:25:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.242 19:25:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:34.242 19:25:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:34.242 19:25:40 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:34.242 Running I/O for 2 seconds... 
00:29:34.242 [2024-07-12 19:25:40.222737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190eb760 00:29:34.242 [2024-07-12 19:25:40.224479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:8353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.242 [2024-07-12 19:25:40.224505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.242 [2024-07-12 19:25:40.233061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7970 00:29:34.242 [2024-07-12 19:25:40.234141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.242 [2024-07-12 19:25:40.234160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:34.242 [2024-07-12 19:25:40.246407] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190eee38 00:29:34.243 [2024-07-12 19:25:40.248151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19935 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.248168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.257050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f35f0 00:29:34.243 [2024-07-12 19:25:40.258350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:3444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.258366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.269012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7da8 00:29:34.243 [2024-07-12 19:25:40.270194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.270211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.280717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f3e60 00:29:34.243 [2024-07-12 19:25:40.281933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.281949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.292465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f3e60 00:29:34.243 [2024-07-12 19:25:40.293672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:7205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.293689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 
sqhd:0068 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.304234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f3e60 00:29:34.243 [2024-07-12 19:25:40.305451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.305467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.315186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7538 00:29:34.243 [2024-07-12 19:25:40.316418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:13318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.316434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.329188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0a68 00:29:34.243 [2024-07-12 19:25:40.331026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:25211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.331043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.339338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e01f8 00:29:34.243 [2024-07-12 19:25:40.340519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.340536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.351034] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f6cc8 00:29:34.243 [2024-07-12 19:25:40.352101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.352117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:34.243 [2024-07-12 19:25:40.362772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f6cc8 00:29:34.243 [2024-07-12 19:25:40.363950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.243 [2024-07-12 19:25:40.363966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.374493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f6cc8 00:29:34.504 [2024-07-12 19:25:40.375675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.375691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:70 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.386432] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190df988 00:29:34.504 [2024-07-12 19:25:40.387606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.387623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.397419] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ddc00 00:29:34.504 [2024-07-12 19:25:40.398575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.398591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.411979] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e38d0 00:29:34.504 [2024-07-12 19:25:40.413948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.413964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.423744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e9e10 00:29:34.504 [2024-07-12 19:25:40.425702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.425718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.434013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e27f0 00:29:34.504 [2024-07-12 19:25:40.435238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.435253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.444981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f20d8 00:29:34.504 [2024-07-12 19:25:40.446257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.446276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.456664] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e1710 00:29:34.504 [2024-07-12 19:25:40.457961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.457977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.469120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e1710 00:29:34.504 [2024-07-12 19:25:40.470422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:2675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.470438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.480061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f1868 00:29:34.504 [2024-07-12 19:25:40.481412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.481428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.494059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f1868 00:29:34.504 [2024-07-12 19:25:40.495987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.496002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.504309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fcdd0 00:29:34.504 [2024-07-12 19:25:40.505592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:10059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.505607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.516026] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190eaef0 00:29:34.504 [2024-07-12 19:25:40.517246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.504 [2024-07-12 19:25:40.517262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:34.504 [2024-07-12 19:25:40.529192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190feb58 00:29:34.505 [2024-07-12 19:25:40.531085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.531101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.539473] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7100 00:29:34.505 [2024-07-12 19:25:40.540740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:8002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.540756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.551226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e88f8 00:29:34.505 [2024-07-12 19:25:40.552497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.552513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.562163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7970 00:29:34.505 [2024-07-12 19:25:40.563413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.563429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.573863] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e8088 00:29:34.505 [2024-07-12 19:25:40.575099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.575115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.586386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e1b48 00:29:34.505 [2024-07-12 19:25:40.587584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.587599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.598087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7100 00:29:34.505 [2024-07-12 19:25:40.599248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.599263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.609800] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e1b48 00:29:34.505 [2024-07-12 19:25:40.611018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.611034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.621522] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ecc78 00:29:34.505 [2024-07-12 19:25:40.622710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.505 [2024-07-12 19:25:40.622726] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:34.505 [2024-07-12 19:25:40.632647] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190efae0 00:29:34.766 [2024-07-12 19:25:40.633888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.766 [2024-07-12 19:25:40.633904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:34.766 [2024-07-12 19:25:40.645462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f3e60 00:29:34.766 [2024-07-12 19:25:40.646832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.766 [2024-07-12 19:25:40.646848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:34.766 [2024-07-12 19:25:40.657200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f3e60 00:29:34.766 [2024-07-12 19:25:40.658567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.766 [2024-07-12 19:25:40.658583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.670352] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fa3a0 00:29:34.767 [2024-07-12 19:25:40.672450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.672466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.679501] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fac10 00:29:34.767 [2024-07-12 19:25:40.680450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:13236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.680466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.692018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e27f0 00:29:34.767 [2024-07-12 19:25:40.693521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.693536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.702267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190df550 00:29:34.767 [2024-07-12 19:25:40.703127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 
19:25:40.703143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.713403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e4140 00:29:34.767 [2024-07-12 19:25:40.714249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:18552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.714264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.724883] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e38d0 00:29:34.767 [2024-07-12 19:25:40.725732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:11018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.725747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.737405] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f57b0 00:29:34.767 [2024-07-12 19:25:40.738234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.738249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.750646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ddc00 00:29:34.767 [2024-07-12 19:25:40.752164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.752182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.760885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f57b0 00:29:34.767 [2024-07-12 19:25:40.761739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.761754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.774117] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190dfdc0 00:29:34.767 [2024-07-12 19:25:40.775582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:14496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.775598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.784296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f35f0 00:29:34.767 [2024-07-12 19:25:40.785138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:34.767 [2024-07-12 19:25:40.785154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.796003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f35f0 00:29:34.767 [2024-07-12 19:25:40.796848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.796864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.807681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e38d0 00:29:34.767 [2024-07-12 19:25:40.808515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:19090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.808531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.820856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e38d0 00:29:34.767 [2024-07-12 19:25:40.822228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:24323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.822244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.830291] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f2d80 00:29:34.767 [2024-07-12 19:25:40.831104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.831119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.844299] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ddc00 00:29:34.767 [2024-07-12 19:25:40.845771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.845786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.854538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ddc00 00:29:34.767 [2024-07-12 19:25:40.855386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.855402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.865475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f2d80 00:29:34.767 [2024-07-12 19:25:40.866227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22579 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.866242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.877935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f2d80 00:29:34.767 [2024-07-12 19:25:40.878761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.878776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:34.767 [2024-07-12 19:25:40.891911] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f4298 00:29:34.767 [2024-07-12 19:25:40.893399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:34.767 [2024-07-12 19:25:40.893415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.905205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f2d80 00:29:35.029 [2024-07-12 19:25:40.907295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.907311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.916064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f81e0 00:29:35.029 [2024-07-12 19:25:40.917800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.917815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.925490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e4578 00:29:35.029 [2024-07-12 19:25:40.926575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.926591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.939460] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fef90 00:29:35.029 [2024-07-12 19:25:40.941195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:9807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.941210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.948873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f96f8 00:29:35.029 [2024-07-12 19:25:40.949957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:589 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.949973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.962437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0ea0 00:29:35.029 [2024-07-12 19:25:40.964164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.964180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.973021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e27f0 00:29:35.029 [2024-07-12 19:25:40.974240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:3888 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.974256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.984885] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f9b30 00:29:35.029 [2024-07-12 19:25:40.986121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.986139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:40.996590] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f9b30 00:29:35.029 [2024-07-12 19:25:40.997828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:40.997844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.008347] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f9b30 00:29:35.029 [2024-07-12 19:25:41.009584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:24065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.009600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.020061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f9b30 00:29:35.029 [2024-07-12 19:25:41.021245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:96 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.021260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.033256] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e27f0 00:29:35.029 [2024-07-12 19:25:41.035127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 
nsid:1 lba:19439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.035142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.041403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0ea0 00:29:35.029 [2024-07-12 19:25:41.042283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:9177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.042298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.055420] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f0bc0 00:29:35.029 [2024-07-12 19:25:41.056927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.056945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.064872] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f6458 00:29:35.029 [2024-07-12 19:25:41.065733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.065748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.079412] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e99d8 00:29:35.029 [2024-07-12 19:25:41.081080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.081096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.089631] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ea248 00:29:35.029 [2024-07-12 19:25:41.090669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.090684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.100585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e84c0 00:29:35.029 [2024-07-12 19:25:41.101611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.101627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.114626] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f1868 00:29:35.029 [2024-07-12 19:25:41.116223] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.116239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.124072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fbcf0 00:29:35.029 [2024-07-12 19:25:41.125095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.125110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.135779] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f5378 00:29:35.029 [2024-07-12 19:25:41.136791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.136807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:35.029 [2024-07-12 19:25:41.148246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e84c0 00:29:35.029 [2024-07-12 19:25:41.149261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:11933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.029 [2024-07-12 19:25:41.149277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.161443] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e84c0 00:29:35.291 [2024-07-12 19:25:41.163099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:14288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.163115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.170903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fbcf0 00:29:35.291 [2024-07-12 19:25:41.171910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.171926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.183383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fc128 00:29:35.291 [2024-07-12 19:25:41.184398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:6270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.184414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.195120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fc128 00:29:35.291 [2024-07-12 19:25:41.196152] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.196168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.206842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fc128 00:29:35.291 [2024-07-12 19:25:41.207815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:25587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.207830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.218523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e1b48 00:29:35.291 [2024-07-12 19:25:41.219518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.219533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.231706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e1b48 00:29:35.291 [2024-07-12 19:25:41.233375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.233390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.242305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e5a90 00:29:35.291 [2024-07-12 19:25:41.243465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.243481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.255711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f0ff8 00:29:35.291 [2024-07-12 19:25:41.257508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25204 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.257524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.266295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190dece0 00:29:35.291 [2024-07-12 19:25:41.267611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.267626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.278155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e23b8 00:29:35.291 [2024-07-12 
19:25:41.279435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:24790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.279450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.289271] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7538 00:29:35.291 [2024-07-12 19:25:41.290576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.290592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.301373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ef270 00:29:35.291 [2024-07-12 19:25:41.302179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:14347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.302195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.314092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7970 00:29:35.291 [2024-07-12 19:25:41.315842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.315857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.323559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f0ff8 00:29:35.291 [2024-07-12 19:25:41.324643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.324658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.335241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fac10 00:29:35.291 [2024-07-12 19:25:41.336223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:1410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.336238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.347742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f8a50 00:29:35.291 [2024-07-12 19:25:41.348838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.348854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.359423] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de038 
00:29:35.291 [2024-07-12 19:25:41.360511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.360529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.372607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de038 00:29:35.291 [2024-07-12 19:25:41.374346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.374362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.382869] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f1430 00:29:35.291 [2024-07-12 19:25:41.383956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.291 [2024-07-12 19:25:41.383972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.291 [2024-07-12 19:25:41.394772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fac10 00:29:35.292 [2024-07-12 19:25:41.395854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.292 [2024-07-12 19:25:41.395871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.292 [2024-07-12 19:25:41.406492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fac10 00:29:35.292 [2024-07-12 19:25:41.407567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17832 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.292 [2024-07-12 19:25:41.407584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.292 [2024-07-12 19:25:41.418217] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fac10 00:29:35.292 [2024-07-12 19:25:41.419244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.292 [2024-07-12 19:25:41.419259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.431469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f1430 00:29:35.554 [2024-07-12 19:25:41.433182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.433197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.442070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with 
pdu=0x2000190f6458 00:29:35.554 [2024-07-12 19:25:41.443240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.443257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.453616] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f81e0 00:29:35.554 [2024-07-12 19:25:41.454842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:20407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.454858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.466416] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f2510 00:29:35.554 [2024-07-12 19:25:41.467968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:7739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.467985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.477951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de470 00:29:35.554 [2024-07-12 19:25:41.479464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.479479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.487753] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f6020 00:29:35.554 [2024-07-12 19:25:41.488786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.488802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.500257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f0bc0 00:29:35.554 [2024-07-12 19:25:41.501246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.501262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.511146] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7100 00:29:35.554 [2024-07-12 19:25:41.512146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.512162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.523659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1099730) with pdu=0x2000190f1430 00:29:35.554 [2024-07-12 19:25:41.524660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.524675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.535373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0630 00:29:35.554 [2024-07-12 19:25:41.536399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.536416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.547100] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0630 00:29:35.554 [2024-07-12 19:25:41.548106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:24221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.548125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.558808] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0630 00:29:35.554 [2024-07-12 19:25:41.559813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:23522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.559828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.572024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e0630 00:29:35.554 [2024-07-12 19:25:41.573673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:8605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.573689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.583720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f1430 00:29:35.554 [2024-07-12 19:25:41.585388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:7447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.585404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.593171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de470 00:29:35.554 [2024-07-12 19:25:41.594152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.594168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.607206] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e12d8 00:29:35.554 [2024-07-12 19:25:41.608837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.608853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.616670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7100 00:29:35.554 [2024-07-12 19:25:41.617656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.617671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.631249] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e49b0 00:29:35.554 [2024-07-12 19:25:41.633010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.633027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.641452] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fa3a0 00:29:35.554 [2024-07-12 19:25:41.642564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.642580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.653129] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e99d8 00:29:35.554 [2024-07-12 19:25:41.654260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.654275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.664775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fa3a0 00:29:35.554 [2024-07-12 19:25:41.665904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.665923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:35.554 [2024-07-12 19:25:41.676495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fa3a0 00:29:35.554 [2024-07-12 19:25:41.677613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2097 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.554 [2024-07-12 19:25:41.677629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.688182] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190dece0 00:29:35.816 [2024-07-12 19:25:41.689244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.689260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.699105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e3d08 00:29:35.816 [2024-07-12 19:25:41.700211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.700227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.711583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e3d08 00:29:35.816 [2024-07-12 19:25:41.712689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:14837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.712705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.723269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e3d08 00:29:35.816 [2024-07-12 19:25:41.724378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:13018 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.724395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.734993] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f35f0 00:29:35.816 [2024-07-12 19:25:41.736102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.736118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.746708] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f0ff8 00:29:35.816 [2024-07-12 19:25:41.747806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:19727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.747822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.757640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ebfd0 00:29:35.816 [2024-07-12 19:25:41.758726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.758742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:35.816 
[2024-07-12 19:25:41.770159] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e95a0 00:29:35.816 [2024-07-12 19:25:41.771227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.771249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.783387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190eff18 00:29:35.816 [2024-07-12 19:25:41.785110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:10240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.785130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.792851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ec408 00:29:35.816 [2024-07-12 19:25:41.793896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.793912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.805320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ec408 00:29:35.816 [2024-07-12 19:25:41.806402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.806418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.816982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e9168 00:29:35.816 [2024-07-12 19:25:41.818050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.818065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.828756] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e9168 00:29:35.816 [2024-07-12 19:25:41.829821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.829837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.840490] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e9168 00:29:35.816 [2024-07-12 19:25:41.841557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:13104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.841572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005a p:0 
m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.852169] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ecc78 00:29:35.816 [2024-07-12 19:25:41.853253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:2699 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.853269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.865379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ecc78 00:29:35.816 [2024-07-12 19:25:41.867073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:1866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.867089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.874846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ea248 00:29:35.816 [2024-07-12 19:25:41.875895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.875911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.887354] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e8088 00:29:35.816 [2024-07-12 19:25:41.888405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.888421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.898321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de470 00:29:35.816 [2024-07-12 19:25:41.899378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22085 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.899394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.910828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e88f8 00:29:35.816 [2024-07-12 19:25:41.911861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:24961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.911878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.922542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e88f8 00:29:35.816 [2024-07-12 19:25:41.923568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:6503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.923583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 
cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.816 [2024-07-12 19:25:41.934243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e88f8 00:29:35.816 [2024-07-12 19:25:41.935263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.816 [2024-07-12 19:25:41.935279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:41.945963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e88f8 00:29:36.078 [2024-07-12 19:25:41.946954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:41.946970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:41.957677] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de470 00:29:36.078 [2024-07-12 19:25:41.958688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:41.958704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:41.969430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f7970 00:29:36.078 [2024-07-12 19:25:41.970464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:41.970480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:41.980338] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190eaab8 00:29:36.078 [2024-07-12 19:25:41.981376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:13107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:41.981392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:41.994901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e6300 00:29:36.078 [2024-07-12 19:25:41.996720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:41.996735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.005525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190de8a8 00:29:36.078 [2024-07-12 19:25:42.006865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.006880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:94 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.018965] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ff3c8 00:29:36.078 [2024-07-12 19:25:42.020949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.020967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.029226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fb8b8 00:29:36.078 [2024-07-12 19:25:42.030570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:18303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.030587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.040920] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e88f8 00:29:36.078 [2024-07-12 19:25:42.042229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19653 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.042245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.052645] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190fb8b8 00:29:36.078 [2024-07-12 19:25:42.053957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.053972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.064402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190ed4e8 00:29:36.078 [2024-07-12 19:25:42.065737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.065753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.075383] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190df550 00:29:36.078 [2024-07-12 19:25:42.076706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.076724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.087901] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e8088 00:29:36.078 [2024-07-12 19:25:42.089191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.089206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.099651] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e8088 00:29:36.078 [2024-07-12 19:25:42.100978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:23962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.100994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.111372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e8088 00:29:36.078 [2024-07-12 19:25:42.112697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:19587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.112713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.122318] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f4b08 00:29:36.078 [2024-07-12 19:25:42.123628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.123643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.136322] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e8d30 00:29:36.078 [2024-07-12 19:25:42.138243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.138258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.146519] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e01f8 00:29:36.078 [2024-07-12 19:25:42.147832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:13671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.147849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.158219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e01f8 00:29:36.078 [2024-07-12 19:25:42.159520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:2551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.159536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.171389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190e01f8 00:29:36.078 [2024-07-12 19:25:42.173374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:10611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.173389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.181630] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f4b08 00:29:36.078 [2024-07-12 19:25:42.182940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.182956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.193569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f20d8 00:29:36.078 [2024-07-12 19:25:42.195156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:14382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.195171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:36.078 [2024-07-12 19:25:42.203822] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099730) with pdu=0x2000190f8a50 00:29:36.078 [2024-07-12 19:25:42.204754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.078 [2024-07-12 19:25:42.204770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:36.340 00:29:36.340 Latency(us) 00:29:36.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.340 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.340 nvme0n1 : 2.00 21673.31 84.66 0.00 0.00 5897.53 2129.92 16820.91 00:29:36.340 =================================================================================================================== 00:29:36.340 Total : 21673.31 84.66 0.00 0.00 5897.53 2129.92 16820.91 00:29:36.340 0 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:36.340 | .driver_specific 00:29:36.340 | .nvme_error 00:29:36.340 | .status_code 00:29:36.340 | .command_transient_transport_error' 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 )) 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1606709 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1606709 ']' 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1606709 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:36.340 19:25:42 
nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1606709 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1606709' 00:29:36.340 killing process with pid 1606709 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1606709 00:29:36.340 Received shutdown signal, test time was about 2.000000 seconds 00:29:36.340 00:29:36.340 Latency(us) 00:29:36.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:36.340 =================================================================================================================== 00:29:36.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:36.340 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1606709 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1607383 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1607383 /var/tmp/bperf.sock 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1607383 ']' 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:36.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:36.601 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.601 [2024-07-12 19:25:42.585019] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:29:36.601 [2024-07-12 19:25:42.585073] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1607383 ] 00:29:36.601 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:36.601 Zero copy mechanism will not be used. 
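[editor's sketch, not part of the captured log] The pass/fail check traced just above (host/digest.sh@71) reads the transient-transport-error counter back from bdevperf over its RPC socket and requires it to be non-zero. The bash below reconstructs that check from the trace: the rpc.py path, socket, bdev_get_iostat call and jq filter are copied verbatim from the log, while the wrapper bodies and the final assertion are assumptions rather than the actual digest.sh source.

  # Reconstruction of the traced error-count check (illustrative only).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # bperf_rpc expands as shown at host/digest.sh@18 in the trace above.
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

  get_transient_errcount() {
      # bdev_get_iostat reports per-bdev NVMe error statistics because the
      # controller was set up with bdev_nvme_set_options --nvme-error-stat.
      bperf_rpc bdev_get_iostat -b "$1" \
          | jq -r '.bdevs[0]
                  | .driver_specific
                  | .nvme_error
                  | .status_code
                  | .command_transient_transport_error'
  }

  errcount=$(get_transient_errcount nvme0n1)   # the run above reported 170
  (( errcount > 0 ))                           # the test fails if no injected digest error was counted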
00:29:36.601 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.601 [2024-07-12 19:25:42.637234] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:36.601 [2024-07-12 19:25:42.690092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:36.869 19:25:42 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:37.443 nvme0n1 00:29:37.443 19:25:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:37.443 19:25:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:37.443 19:25:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:37.443 19:25:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:37.443 19:25:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:37.443 19:25:43 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:37.443 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:37.443 Zero copy mechanism will not be used. 00:29:37.443 Running I/O for 2 seconds... 
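[editor's sketch, not part of the captured log] The trace above shows the setup for the 131072-byte, queue-depth-16 pass: a second bdevperf instance is started on /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, CRC32C error injection is cleared, a controller is attached with TCP data digest enabled (--ddgst), CRC32C corruption is injected, and perform_tests is issued. The condensed bash below is assembled from those traced commands; the commands and arguments are verbatim from the log, while the rpc_cmd wrapper (assumed to hit the target application's default RPC socket rather than the bperf socket) and the sleep/wait handling are illustrative assumptions.

  # Condensed reconstruction of the traced setup sequence (illustrative only).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
  rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }   # assumption: harness helper using the default RPC socket

  # 128 KiB random writes, queue depth 16, 2 s run; -z makes bdevperf wait for the perform_tests RPC.
  "$SPDK/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
  sleep 1   # the harness instead polls the socket via waitforlisten

  bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry I/O indefinitely
  rpc_cmd accel_error_inject_error -o crc32c -t disable                     # clear any previous crc32c injection
  bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                        # data digest on: CRC32C checked per data PDU
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32               # arguments verbatim from the trace
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
  wait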
00:29:37.443 [2024-07-12 19:25:43.395989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.396389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.396416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.409812] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.410071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.410091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.422042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.422284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.422302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.432141] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.432465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.432483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.442913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.443256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.443274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.452241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.452572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.452589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.462455] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.462870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.462891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.473392] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.473736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.473754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.483093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.483450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.483468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.492509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.492738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.443 [2024-07-12 19:25:43.492755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.443 [2024-07-12 19:25:43.502804] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.443 [2024-07-12 19:25:43.503169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.503186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.513020] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.513340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.513358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.520477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.520811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.520828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.527828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.527961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.527977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.537192] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.537532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.537550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.544185] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.544299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.544314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.554209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.554517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.554535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.444 [2024-07-12 19:25:43.564856] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.444 [2024-07-12 19:25:43.564969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.444 [2024-07-12 19:25:43.564984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.574413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.574746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.574763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.582816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.583107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.583129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.593043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.593273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.593289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.601776] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.602101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.602118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.609952] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.610296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.610314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.619012] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.619339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.619356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.628888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.629201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.629218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.638570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.638893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.638910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.648206] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.648326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.648341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.657216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.657440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 
[2024-07-12 19:25:43.657457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.666943] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.667296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.667313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.677532] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.677831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.677849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.689147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.689497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.689513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.700583] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.700883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.700900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.709801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.710151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.710171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.719969] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.720332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.720349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.730589] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.730814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.730830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.740610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.740923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.740940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.748545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.748771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.748787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.756339] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.706 [2024-07-12 19:25:43.756563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.706 [2024-07-12 19:25:43.756580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.706 [2024-07-12 19:25:43.764411] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.764635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.764651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.772527] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.772864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.772881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.780994] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.781202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.781217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.789092] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.789348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.789365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.797321] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.797676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.797693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.806136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.806396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.806412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.813561] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.813891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.813908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.820961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.821139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.821155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.827075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.827441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.827458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.707 [2024-07-12 19:25:43.833453] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.707 [2024-07-12 19:25:43.833730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.707 [2024-07-12 19:25:43.833746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.840032] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.840299] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.840316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.847180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.847432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.847451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.854018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.854304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.854320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.860393] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.860638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.860654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.865163] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.865412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.865427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.871661] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.871898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.871914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.879320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.879548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.879564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.887427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 
[2024-07-12 19:25:43.887803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.887820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.896274] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.896600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.896617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.905704] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.906026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.906043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.915721] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.916027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.916043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.926586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.926948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.926965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.935860] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.936032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.936048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.944840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.945119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.945140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.955175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.955489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.955505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.969 [2024-07-12 19:25:43.964834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.969 [2024-07-12 19:25:43.965120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.969 [2024-07-12 19:25:43.965141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:43.974958] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:43.975137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:43.975153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:43.984160] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:43.984462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:43.984478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:43.994084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:43.994258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:43.994274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.003346] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.003703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.003719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.013686] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.013904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.013920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.022837] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.023142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.023159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.032025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.032427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.032444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.043665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.043881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.043897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.054096] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.054549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.054565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.065388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.065636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.065653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.076409] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.076634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.076650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:37.970 [2024-07-12 19:25:44.085228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.085598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.085617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:37.970 [2024-07-12 19:25:44.096175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:37.970 [2024-07-12 19:25:44.096635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.970 [2024-07-12 19:25:44.096652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.107214] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.107559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.107575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.117477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.117780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.117797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.128280] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.128682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.128699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.139887] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.140314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.140331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.150466] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.150640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.150656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.160851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.161267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.161283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.170072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.170264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.170280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.179986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.180204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.180220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.190469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.190641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.190657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.199001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.199433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.199450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.209396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.209729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.209745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.220220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.220427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.220443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.227735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.227951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.227967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.233468] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.233695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.233710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.240203] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.240405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.240421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.246745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.246926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.246942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.255379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.255618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.255635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.263743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.263948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.263963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.273372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.273546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.273562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.281427] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.281601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.281617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.288381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.288733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.288750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.296041] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.233 [2024-07-12 19:25:44.296297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.233 [2024-07-12 19:25:44.296314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.233 [2024-07-12 19:25:44.306629] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.306802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 [2024-07-12 19:25:44.306818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.234 [2024-07-12 19:25:44.315679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.315963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 [2024-07-12 19:25:44.315979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.234 [2024-07-12 19:25:44.322829] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.323012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 [2024-07-12 19:25:44.323030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.234 [2024-07-12 19:25:44.332418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.332628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 [2024-07-12 19:25:44.332643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.234 [2024-07-12 19:25:44.341242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.341499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 
[2024-07-12 19:25:44.341516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.234 [2024-07-12 19:25:44.350084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.350272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 [2024-07-12 19:25:44.350288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.234 [2024-07-12 19:25:44.359216] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.234 [2024-07-12 19:25:44.359327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.234 [2024-07-12 19:25:44.359343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.495 [2024-07-12 19:25:44.368030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.368219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.368235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.376465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.376686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.376701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.385272] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.385669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.385685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.392989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.393358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.393375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.401570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.401748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.401764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.412030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.412230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.412246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.421782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.422048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.422065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.431093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.431271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.431287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.440685] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.440971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.440987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.450283] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.450534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.450551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.458402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.458613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.458629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.467484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.467656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.467672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.475858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.476132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.476148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.485189] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.485437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.485454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.492729] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.493011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.493029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.500324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.500498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.500514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.506801] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.507068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.507086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.514255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.514428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.514443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.522246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.522466] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.522482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.531401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.531575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.531590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.538450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.538677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.538693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.545013] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.545230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.545248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.553150] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.553462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.553479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.562166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.562531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.562547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.570899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.571264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.571282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.579954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 
[2024-07-12 19:25:44.580170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.580186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.588061] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.588448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.588465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.597155] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.597436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.597453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.496 [2024-07-12 19:25:44.605655] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.496 [2024-07-12 19:25:44.606026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.496 [2024-07-12 19:25:44.606043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.497 [2024-07-12 19:25:44.614022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.497 [2024-07-12 19:25:44.614278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.497 [2024-07-12 19:25:44.614295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.625017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.625371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.625387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.635319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.635713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.635730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.646118] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.646505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.646521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.656894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.657385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.657402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.667947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.668301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.668317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.676791] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.677001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.677017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.684640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.684872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.684891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.694131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.694409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.694426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.701195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.701379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.701397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.707890] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.708069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.708085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.714821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.714993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.715008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.721102] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.721447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.721463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.728894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.729085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.729100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.736769] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.736940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.759 [2024-07-12 19:25:44.736956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.759 [2024-07-12 19:25:44.745131] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.759 [2024-07-12 19:25:44.745500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.745517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.755108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.755298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.755313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:38.760 [2024-07-12 19:25:44.764310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.764530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.764546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.769973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.770156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.770171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.776095] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.776289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.776304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.782237] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.782412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.782429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.788905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.789262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.789280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.798960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.799136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.799152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.806268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.806440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.806455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.813465] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.813651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.813666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.823604] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.823949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.823966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.832241] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.832528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.832545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.841151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.841316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.841331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.848475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.848852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.848868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.858342] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.858587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.858604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.868300] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.868638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.868654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.877133] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.877306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.877321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:38.760 [2024-07-12 19:25:44.886607] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:38.760 [2024-07-12 19:25:44.886911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:38.760 [2024-07-12 19:25:44.886928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.893398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.893628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.893646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.902229] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.902680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.902697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.912055] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.912459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.912479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.921487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.921666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.921682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.930390] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.930694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.930710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.937542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.937826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.937844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.946418] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.946829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.946845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.953586] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.953788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.953805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.960266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.960615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.960633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.966915] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.967260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.967277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.973951] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.974231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.974248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.981113] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.981517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 
[2024-07-12 19:25:44.981534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.987667] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.987855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.987870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:44.993644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.022 [2024-07-12 19:25:44.993895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.022 [2024-07-12 19:25:44.993912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.022 [2024-07-12 19:25:45.002552] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.002823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.002839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.010772] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.011128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.011145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.018932] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.019287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.019304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.025524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.025757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.025774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.032849] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.033109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.033131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.041875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.042110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.042132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.050579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.050791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.050806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.059503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.059807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.059824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.067474] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.067752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.067769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.077120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.077428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.077444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.087357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.087548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.087564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.096494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.096769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.096785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.105464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.105665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.105681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.113397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.113777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.113794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.122334] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.122654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.122674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.130542] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.130716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.130732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.137281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.137537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.137553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.023 [2024-07-12 19:25:45.144303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.023 [2024-07-12 19:25:45.144533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.023 [2024-07-12 19:25:45.144550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.152906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.153287] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.153304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.163171] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.163463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.163480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.171302] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.171678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.171693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.180281] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.180558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.180575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.187105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.187454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.187470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.194937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.195355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.195372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.204244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.204429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.204445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.212894] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 
[2024-07-12 19:25:45.213104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.213120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.219896] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.220092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.220107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.229436] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.229647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.229663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.237735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.238045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.238061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.246132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.246520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.246537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.254093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.254356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.254373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.260766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.260943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.260962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.268811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.269061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.269078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.277219] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.277522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.277539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.287914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.288028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.288043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.297959] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.298249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.298266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.308098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.308278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.308293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.317050] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.317475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.317492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.286 [2024-07-12 19:25:45.327750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90 00:29:39.286 [2024-07-12 19:25:45.327956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.286 [2024-07-12 19:25:45.327972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.287 [2024-07-12 19:25:45.337261] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90
00:29:39.287 [2024-07-12 19:25:45.337665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.287 [2024-07-12 19:25:45.337681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:39.287 [2024-07-12 19:25:45.347601] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90
00:29:39.287 [2024-07-12 19:25:45.347777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.287 [2024-07-12 19:25:45.347793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:39.287 [2024-07-12 19:25:45.358319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90
00:29:39.287 [2024-07-12 19:25:45.358673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.287 [2024-07-12 19:25:45.358689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:39.287 [2024-07-12 19:25:45.368398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90
00:29:39.287 [2024-07-12 19:25:45.368746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.287 [2024-07-12 19:25:45.368762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:39.287 [2024-07-12 19:25:45.378540] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1099aa0) with pdu=0x2000190fef90
00:29:39.287 [2024-07-12 19:25:45.378834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:39.287 [2024-07-12 19:25:45.378851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:39.287
00:29:39.287 Latency(us)
00:29:39.287 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:39.287 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:39.287 nvme0n1 : 2.01 3501.20 437.65 0.00 0.00 4561.78 1925.12 15947.09
00:29:39.287 ===================================================================================================================
00:29:39.287 Total : 3501.20 437.65 0.00 0.00 4561.78 1925.12 15947.09
00:29:39.287 0
00:29:39.287 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:39.287 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:39.287 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:39.287 | .driver_specific
00:29:39.287 | .nvme_error
00:29:39.287 | .status_code
00:29:39.287 | .command_transient_transport_error'
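The three script-trace lines above are the heart of the error check: get_transient_errcount asks the bperf app for per-bdev I/O statistics over its RPC socket and pulls the transient-transport-error counter out of the returned JSON with jq. A minimal stand-alone sketch of that read-back, assuming it is run from the SPDK repository root and that a bdevperf instance exposing a bdev named nvme0n1 is still listening on /var/tmp/bperf.sock:

  #!/usr/bin/env bash
  # Sketch: read the transient transport error counter for one bdev (socket path and bdev name assumed).
  BPERF_SOCK=/var/tmp/bperf.sock
  BDEV=nvme0n1
  errcount=$(scripts/rpc.py -s "$BPERF_SOCK" bdev_get_iostat -b "$BDEV" |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test only passes when at least one injected data digest error surfaced this way.
  (( errcount > 0 )) && echo "saw $errcount transient transport errors on $BDEV"

The (( 226 > 0 )) check that follows in the log is exactly this comparison, with the counter already substituted in by the shell trace.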
00:29:39.287 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 226 > 0 ))
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1607383
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1607383 ']'
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1607383
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1607383
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1607383'
00:29:39.549 killing process with pid 1607383
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1607383
00:29:39.549 Received shutdown signal, test time was about 2.000000 seconds
00:29:39.549
00:29:39.549 Latency(us)
00:29:39.549 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:39.549 ===================================================================================================================
00:29:39.549 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:39.549 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1607383
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1605001
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1605001 ']'
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1605001
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1605001
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1605001'
00:29:39.812 killing process with pid 1605001
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1605001
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1605001
00:29:39.812
00:29:39.812 real 0m15.077s
00:29:39.812 user 0m29.332s
00:29:39.812 sys 0m3.041s
00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@1124 -- # xtrace_disable 00:29:39.812 19:25:45 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:39.812 ************************************ 00:29:39.812 END TEST nvmf_digest_error 00:29:39.812 ************************************ 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:40.072 19:25:45 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:40.072 rmmod nvme_tcp 00:29:40.072 rmmod nvme_fabrics 00:29:40.072 rmmod nvme_keyring 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1605001 ']' 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1605001 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1605001 ']' 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1605001 00:29:40.072 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1605001) - No such process 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1605001 is not found' 00:29:40.072 Process with pid 1605001 is not found 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:40.072 19:25:46 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.984 19:25:48 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:41.984 00:29:41.984 real 0m40.765s 00:29:41.984 user 1m3.255s 00:29:41.984 sys 0m11.606s 00:29:41.984 19:25:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:41.984 19:25:48 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:41.984 ************************************ 00:29:41.984 END TEST nvmf_digest 00:29:41.984 ************************************ 00:29:42.245 19:25:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:42.245 19:25:48 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 
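Stripped of the xtrace noise, the nvmftestfini sequence that closes out the digest suite above amounts to: stop whatever nvmf/bperf app is still registered, sync, unload the kernel NVMe/TCP host modules, and flush the test addresses off the interface. A rough bash equivalent of just those steps, assuming the interface name cvl_0_1 from this rig and a caller-supplied target pid, both purely illustrative and not the real nvmftestfini:

  # Illustrative teardown sketch based on the commands traced in this log.
  teardown_nvmf_tcp() {
      local app_pid=$1
      # kill -0 only probes for existence; the log shows this same check before killing.
      if kill -0 "$app_pid" 2>/dev/null; then
          kill "$app_pid"
      fi
      sync
      # Unload the NVMe-oF TCP host stack; the log shows nvme_tcp, nvme_fabrics and
      # nvme_keyring all being removed at this point.
      modprobe -v -r nvme-tcp
      modprobe -v -r nvme-fabrics
      # Drop the test IPs configured on the E810 port used for this run.
      ip -4 addr flush cvl_0_1
  }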
00:29:42.245 19:25:48 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:42.245 19:25:48 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:42.245 19:25:48 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:42.245 19:25:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:42.245 19:25:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:42.245 19:25:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:42.245 ************************************ 00:29:42.245 START TEST nvmf_bdevperf 00:29:42.245 ************************************ 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:42.245 * Looking for test storage... 00:29:42.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:42.245 19:25:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:29:48.836 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:29:48.836 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:29:48.836 Found net devices under 0000:4b:00.0: cvl_0_0 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
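The device-discovery trace above maps supported NIC PCI IDs to kernel net device names through sysfs. The loop below is a simplified stand-in for gather_supported_nvmf_pci_devs (assumed behaviour only: the real function builds a pci_bus_cache, also knows the x722 and Mellanox IDs, and uses its own link-state check), shown here to make the "Found ..." lines easier to follow.

  intel=0x8086
  e810_dev=0x159b                                   # matches the Found 0000:4b:00.x lines
  net_devs=()
  for pci in /sys/bus/pci/devices/*; do
      [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810_dev" ]] || continue
      echo "Found ${pci##*/} ($intel - $e810_dev)"
      for net_dev in "$pci"/net/*; do
          [[ -e $net_dev ]] || continue             # skip functions with no bound netdev
          [[ $(<"$net_dev/operstate") == up ]] && net_devs+=("${net_dev##*/}")
      done
  done
  printf 'Found net devices: %s\n' "${net_devs[*]}" # cvl_0_0 cvl_0_1 on this host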
00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:29:48.836 Found net devices under 0000:4b:00.1: cvl_0_1 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:48.836 19:25:54 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:49.101 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:49.101 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:49.101 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:49.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:49.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.500 ms 00:29:49.101 00:29:49.101 --- 10.0.0.2 ping statistics --- 00:29:49.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.101 rtt min/avg/max/mdev = 0.500/0.500/0.500/0.000 ms 00:29:49.101 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:49.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:49.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.324 ms 00:29:49.102 00:29:49.102 --- 10.0.0.1 ping statistics --- 00:29:49.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:49.102 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1612068 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1612068 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1612068 ']' 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:49.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.102 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:49.102 [2024-07-12 19:25:55.162892] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
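Condensed from the nvmf_tcp_init trace above: the test splits the two E810 ports between the root namespace (initiator side, cvl_0_1, 10.0.0.1) and a fresh cvl_0_0_ns_spdk namespace (target side, cvl_0_0, 10.0.0.2), opens TCP port 4420, verifies reachability in both directions, and then launches nvmf_tgt inside the namespace (paths shortened here):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                      # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE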
00:29:49.102 [2024-07-12 19:25:55.162955] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:49.102 EAL: No free 2048 kB hugepages reported on node 1 00:29:49.413 [2024-07-12 19:25:55.250689] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:49.413 [2024-07-12 19:25:55.312379] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:49.413 [2024-07-12 19:25:55.312412] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:49.413 [2024-07-12 19:25:55.312417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:49.413 [2024-07-12 19:25:55.312422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:49.413 [2024-07-12 19:25:55.312428] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:49.413 [2024-07-12 19:25:55.312533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:49.413 [2024-07-12 19:25:55.312694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:49.413 [2024-07-12 19:25:55.312696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.984 [2024-07-12 19:25:55.972744] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.984 19:25:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.984 Malloc0 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:49.984 [2024-07-12 19:25:56.037951] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:49.984 { 00:29:49.984 "params": { 00:29:49.984 "name": "Nvme$subsystem", 00:29:49.984 "trtype": "$TEST_TRANSPORT", 00:29:49.984 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:49.984 "adrfam": "ipv4", 00:29:49.984 "trsvcid": "$NVMF_PORT", 00:29:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:49.984 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:49.984 "hdgst": ${hdgst:-false}, 00:29:49.984 "ddgst": ${ddgst:-false} 00:29:49.984 }, 00:29:49.984 "method": "bdev_nvme_attach_controller" 00:29:49.984 } 00:29:49.984 EOF 00:29:49.984 )") 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:49.984 19:25:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:49.984 "params": { 00:29:49.984 "name": "Nvme1", 00:29:49.984 "trtype": "tcp", 00:29:49.984 "traddr": "10.0.0.2", 00:29:49.984 "adrfam": "ipv4", 00:29:49.984 "trsvcid": "4420", 00:29:49.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:49.985 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:49.985 "hdgst": false, 00:29:49.985 "ddgst": false 00:29:49.985 }, 00:29:49.985 "method": "bdev_nvme_attach_controller" 00:29:49.985 }' 00:29:49.985 [2024-07-12 19:25:56.091866] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
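Spelled out, the target provisioning performed by the rpc_cmd calls above is roughly the following; rpc_cmd is assumed here to be a thin wrapper around scripts/rpc.py talking to the target's /var/tmp/spdk.sock (argument strings copied from the trace, paths shortened):

  rpc="scripts/rpc.py -s /var/tmp/spdk.sock"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0          # MALLOC_BDEV_SIZE=64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to that subsystem through the gen_nvmf_target_json config printed above (Nvme1 -> nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420) rather than through a local PCIe controller.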
00:29:49.985 [2024-07-12 19:25:56.091912] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612321 ] 00:29:50.246 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.246 [2024-07-12 19:25:56.149332] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:50.246 [2024-07-12 19:25:56.213924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:50.506 Running I/O for 1 seconds... 00:29:51.448 00:29:51.448 Latency(us) 00:29:51.448 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:51.448 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:51.448 Verification LBA range: start 0x0 length 0x4000 00:29:51.448 Nvme1n1 : 1.01 11401.19 44.54 0.00 0.00 11174.08 2498.56 14854.83 00:29:51.448 =================================================================================================================== 00:29:51.448 Total : 11401.19 44.54 0.00 0.00 11174.08 2498.56 14854.83 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1612597 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:51.709 { 00:29:51.709 "params": { 00:29:51.709 "name": "Nvme$subsystem", 00:29:51.709 "trtype": "$TEST_TRANSPORT", 00:29:51.709 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:51.709 "adrfam": "ipv4", 00:29:51.709 "trsvcid": "$NVMF_PORT", 00:29:51.709 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:51.709 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:51.709 "hdgst": ${hdgst:-false}, 00:29:51.709 "ddgst": ${ddgst:-false} 00:29:51.709 }, 00:29:51.709 "method": "bdev_nvme_attach_controller" 00:29:51.709 } 00:29:51.709 EOF 00:29:51.709 )") 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:51.709 19:25:57 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:51.709 "params": { 00:29:51.709 "name": "Nvme1", 00:29:51.709 "trtype": "tcp", 00:29:51.709 "traddr": "10.0.0.2", 00:29:51.709 "adrfam": "ipv4", 00:29:51.709 "trsvcid": "4420", 00:29:51.709 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:51.709 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:51.709 "hdgst": false, 00:29:51.709 "ddgst": false 00:29:51.709 }, 00:29:51.709 "method": "bdev_nvme_attach_controller" 00:29:51.709 }' 00:29:51.709 [2024-07-12 19:25:57.715720] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
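After the 1-second verify run above (about 11.4k IOPS against Nvme1n1), host/bdevperf.sh moves into its error case: a second bdevperf instance is started for 15 seconds and the nvmf target is killed underneath it, which is what produces the wall of "ABORTED - SQ DELETION" completions in the log that follows. Condensed from the @29-@35 trace lines (paths shortened, pids from this run):

  ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
  bdevperfpid=$!            # 1612597 here
  sleep 3
  kill -9 "$nvmfpid"        # nvmf_tgt pid 1612068; its TCP queues vanish mid-run
  sleep 3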
00:29:51.709 [2024-07-12 19:25:57.715777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1612597 ] 00:29:51.709 EAL: No free 2048 kB hugepages reported on node 1 00:29:51.709 [2024-07-12 19:25:57.773159] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.709 [2024-07-12 19:25:57.836483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:51.969 Running I/O for 15 seconds... 00:29:55.276 19:26:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1612068 00:29:55.276 19:26:00 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:55.276 [2024-07-12 19:26:00.681329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:55.276 [2024-07-12 19:26:00.681939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.681989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.276 [2024-07-12 19:26:00.681996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.276 [2024-07-12 19:26:00.682005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:103048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:103056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:103064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:103072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:103080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:103088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 
19:26:00.682105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:103096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:103104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:103112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:103120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:103128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:103136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:103144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:103152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:103160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682273] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:103176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:103192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:103200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:103208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:103216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:103224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:103232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:103248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:103256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:103264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:103280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:103288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:103312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:103320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:103328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682604] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:52 nsid:1 lba:103336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:103344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:103352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:103360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:103368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.277 [2024-07-12 19:26:00.682703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:103384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.277 [2024-07-12 19:26:00.682711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:103392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:103400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:103408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:103416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:103424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:103432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:103464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:103472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:103480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:103488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103496 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:103504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:103512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.682987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:103520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.682994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:103528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:103544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.278 [2024-07-12 19:26:00.683060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.278 [2024-07-12 19:26:00.683076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.278 [2024-07-12 19:26:00.683092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:55.278 [2024-07-12 19:26:00.683110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:103792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.278 [2024-07-12 19:26:00.683208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:103800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.278 [2024-07-12 19:26:00.683229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:55.278 [2024-07-12 19:26:00.683245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:103552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:103560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:103568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:103576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:103584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:103592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:103600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 
19:26:00.683360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:103608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:103616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:103624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:103632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:103640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:103648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:103656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.278 [2024-07-12 19:26:00.683487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:103664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.278 [2024-07-12 19:26:00.683494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:103672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:103680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683527] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:103688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:103696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:103704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:103712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:103720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:103728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:103744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:55.279 [2024-07-12 19:26:00.683660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1749350 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.683677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:55.279 [2024-07-12 19:26:00.683683] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:55.279 [2024-07-12 19:26:00.683690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103752 len:8 PRP1 0x0 PRP2 0x0 00:29:55.279 [2024-07-12 
19:26:00.683698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:55.279 [2024-07-12 19:26:00.683737] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1749350 was disconnected and freed. reset controller. 00:29:55.279 [2024-07-12 19:26:00.687289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.687336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.688338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.688376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.688387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.688629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.688853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.688862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.688871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.692441] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.279 [2024-07-12 19:26:00.701430] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.702185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.702224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.702236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.702476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.702700] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.702709] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.702717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.706280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
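The reconnect attempts above all fail inside posix_sock_create with errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 at this point in the test, so each controller reset ends with "Resetting controller failed." before the next retry is scheduled. A minimal sketch of what that errno means, assuming only the address and port taken from the log lines above (this is not part of the test scripts):

import errno
import socket

def try_connect(addr: str, port: int) -> bool:
    # Attempt a plain TCP connect and report the errno, mirroring the
    # "connect() failed, errno = 111" lines printed by posix_sock_create.
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect((addr, port))
        return True
    except OSError as e:
        # errno 111 == ECONNREFUSED on Linux: no listener on addr:port
        print(f"connect() failed, errno = {e.errno} ({errno.errorcode.get(e.errno, '?')})")
        return False
    finally:
        s.close()

if __name__ == "__main__":
    try_connect("10.0.0.2", 4420)  # address/port taken from the log above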
00:29:55.279 [2024-07-12 19:26:00.715286] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.715987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.716024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.716035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.716284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.716508] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.716518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.716525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.720076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.279 [2024-07-12 19:26:00.729086] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.729717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.729755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.729766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.730005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.730237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.730247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.730255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.733802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.279 [2024-07-12 19:26:00.743002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.743729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.743767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.743779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.744020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.744253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.744263] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.744271] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.747820] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.279 [2024-07-12 19:26:00.756818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.757541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.757578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.757594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.757835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.758059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.758069] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.758076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.761632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.279 [2024-07-12 19:26:00.770666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.771358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.771395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.771408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.279 [2024-07-12 19:26:00.771648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.279 [2024-07-12 19:26:00.771871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.279 [2024-07-12 19:26:00.771881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.279 [2024-07-12 19:26:00.771889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.279 [2024-07-12 19:26:00.775452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.279 [2024-07-12 19:26:00.784657] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.279 [2024-07-12 19:26:00.785410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.279 [2024-07-12 19:26:00.785448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.279 [2024-07-12 19:26:00.785459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.785699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.785922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.785932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.785939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.789496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.280 [2024-07-12 19:26:00.798490] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.799128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.799167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.799178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.799417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.799641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.799655] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.799663] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.803220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.280 [2024-07-12 19:26:00.812420] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.813066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.813084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.813092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.813318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.813539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.813547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.813554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.817098] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.280 [2024-07-12 19:26:00.826308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.827030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.827067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.827080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.827329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.827554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.827563] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.827571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.831119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.280 [2024-07-12 19:26:00.840114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.840883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.840921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.840932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.841178] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.841402] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.841412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.841419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.844969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.280 [2024-07-12 19:26:00.853965] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.854666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.854704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.854714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.854954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.855185] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.855195] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.855202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.858752] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.280 [2024-07-12 19:26:00.867973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.868722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.868760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.868772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.869012] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.869243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.869253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.869260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.872809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.280 [2024-07-12 19:26:00.881810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.882485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.882523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.882533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.882772] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.882995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.280 [2024-07-12 19:26:00.883005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.280 [2024-07-12 19:26:00.883013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.280 [2024-07-12 19:26:00.886571] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.280 [2024-07-12 19:26:00.895777] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.280 [2024-07-12 19:26:00.896513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.280 [2024-07-12 19:26:00.896551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.280 [2024-07-12 19:26:00.896562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.280 [2024-07-12 19:26:00.896806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.280 [2024-07-12 19:26:00.897029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.897039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.897047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.900606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-12 19:26:00.909602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.910371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.910409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.910422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.910665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.910887] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.910897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.910904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.914458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.281 [2024-07-12 19:26:00.923465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.923867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.923887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.923896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.924116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.924343] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.924352] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.924359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.927904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-12 19:26:00.937310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.937939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.937955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.937963] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.938186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.938406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.938415] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.938426] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.941969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.281 [2024-07-12 19:26:00.951171] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.951854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.951892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.951903] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.952151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.952375] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.952385] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.952393] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.955942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-12 19:26:00.965165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.965892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.965930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.965941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.966189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.966413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.966423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.966431] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.970030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.281 [2024-07-12 19:26:00.979060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.979749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.979787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.979798] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.980037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.980267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.980277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.980285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.983835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-12 19:26:00.993037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:00.993575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:00.993617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:00.993628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:00.993867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:00.994090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:00.994099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:00.994107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:00.997663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.281 [2024-07-12 19:26:01.006866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:01.007587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:01.007625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:01.007636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:01.007875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:01.008098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:01.008108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:01.008115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:01.011672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-12 19:26:01.020672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:01.021430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:01.021467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:01.021478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:01.021717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:01.021941] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:01.021951] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:01.021958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:01.025516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.281 [2024-07-12 19:26:01.034513] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:01.035149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:01.035188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:01.035200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.281 [2024-07-12 19:26:01.035442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.281 [2024-07-12 19:26:01.035671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.281 [2024-07-12 19:26:01.035680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.281 [2024-07-12 19:26:01.035688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.281 [2024-07-12 19:26:01.039246] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.281 [2024-07-12 19:26:01.048458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.281 [2024-07-12 19:26:01.049025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.281 [2024-07-12 19:26:01.049044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.281 [2024-07-12 19:26:01.049052] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.049277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.049497] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.049506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.049514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.053056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.282 [2024-07-12 19:26:01.062255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.062916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.062953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.062964] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.063212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.063436] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.063446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.063454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.067002] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-12 19:26:01.076215] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.076837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.076855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.076863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.077083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.077310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.077319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.077326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.080877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.282 [2024-07-12 19:26:01.090130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.090746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.090783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.090794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.091033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.091266] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.091277] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.091285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.094836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-12 19:26:01.104041] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.104651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.104670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.104678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.104897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.105117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.105131] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.105139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.108681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.282 [2024-07-12 19:26:01.117889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.118577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.118615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.118626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.118865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.119087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.119098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.119106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.122683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-12 19:26:01.131734] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.132452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.132491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.132506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.132746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.132969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.132978] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.132986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.136555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.282 [2024-07-12 19:26:01.145564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.146082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.146101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.146109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.146336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.146556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.146566] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.146573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.150116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-12 19:26:01.159542] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.160351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.160389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.160400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.160640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.160863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.160872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.160880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.164447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.282 [2024-07-12 19:26:01.173449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.174090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.174110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.174117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.174344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.174565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.174581] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.174588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.178139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.282 [2024-07-12 19:26:01.187378] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.187975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.187992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.282 [2024-07-12 19:26:01.188000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.282 [2024-07-12 19:26:01.188224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.282 [2024-07-12 19:26:01.188444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.282 [2024-07-12 19:26:01.188453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.282 [2024-07-12 19:26:01.188460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.282 [2024-07-12 19:26:01.192006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.282 [2024-07-12 19:26:01.201216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.282 [2024-07-12 19:26:01.201843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.282 [2024-07-12 19:26:01.201858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.201866] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.202085] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.202312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.202322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.202329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.205872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.283 [2024-07-12 19:26:01.215080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.215764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.215801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.215812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.216052] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.216285] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.216295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.216303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.219854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.283 [2024-07-12 19:26:01.228880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.229589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.229627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.229638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.229877] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.230100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.230110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.230118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.233681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.283 [2024-07-12 19:26:01.242689] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.243409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.243447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.243458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.243697] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.243920] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.243930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.243938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.247493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.283 [2024-07-12 19:26:01.256486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.257045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.257065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.257073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.257300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.257521] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.257530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.257537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.261077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.283 [2024-07-12 19:26:01.270289] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.270858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.270896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.270907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.271161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.271385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.271395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.271403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.274954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.283 [2024-07-12 19:26:01.284173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.284815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.284834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.284842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.285062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.285289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.285299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.285306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.288854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.283 [2024-07-12 19:26:01.298065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.298706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.298722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.298729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.298948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.299173] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.299182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.299188] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.302735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.283 [2024-07-12 19:26:01.311944] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.312544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.312561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.312569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.312788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.313008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.313016] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.313027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.316579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.283 [2024-07-12 19:26:01.325798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.326402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.326418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.326425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.326644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.326864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.326872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.326880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.330431] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.283 [2024-07-12 19:26:01.339636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.340151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.340167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.283 [2024-07-12 19:26:01.340174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.283 [2024-07-12 19:26:01.340393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.283 [2024-07-12 19:26:01.340612] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.283 [2024-07-12 19:26:01.340621] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.283 [2024-07-12 19:26:01.340628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.283 [2024-07-12 19:26:01.344177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.283 [2024-07-12 19:26:01.353596] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.283 [2024-07-12 19:26:01.354223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.283 [2024-07-12 19:26:01.354239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.284 [2024-07-12 19:26:01.354246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.284 [2024-07-12 19:26:01.354465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.284 [2024-07-12 19:26:01.354685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.284 [2024-07-12 19:26:01.354694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.284 [2024-07-12 19:26:01.354700] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.284 [2024-07-12 19:26:01.358248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.284 [2024-07-12 19:26:01.367458] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.284 [2024-07-12 19:26:01.368058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.284 [2024-07-12 19:26:01.368073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.284 [2024-07-12 19:26:01.368080] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.284 [2024-07-12 19:26:01.368305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.284 [2024-07-12 19:26:01.368526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.284 [2024-07-12 19:26:01.368535] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.284 [2024-07-12 19:26:01.368542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.284 [2024-07-12 19:26:01.372087] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.284 [2024-07-12 19:26:01.381299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.284 [2024-07-12 19:26:01.381926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.284 [2024-07-12 19:26:01.381941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.284 [2024-07-12 19:26:01.381949] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.284 [2024-07-12 19:26:01.382172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.284 [2024-07-12 19:26:01.382392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.284 [2024-07-12 19:26:01.382401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.284 [2024-07-12 19:26:01.382407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.284 [2024-07-12 19:26:01.386179] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.284 [2024-07-12 19:26:01.395218] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.284 [2024-07-12 19:26:01.395905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.284 [2024-07-12 19:26:01.395943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.284 [2024-07-12 19:26:01.395956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.284 [2024-07-12 19:26:01.396206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.284 [2024-07-12 19:26:01.396430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.284 [2024-07-12 19:26:01.396439] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.284 [2024-07-12 19:26:01.396447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.284 [2024-07-12 19:26:01.399999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.545 [2024-07-12 19:26:01.409221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.409828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.409847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.409855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.410080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.410308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.410317] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.410324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.413874] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.546 [2024-07-12 19:26:01.423098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.423831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.423869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.423880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.424119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.424352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.424362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.424370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.427921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.546 [2024-07-12 19:26:01.436926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.437545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.437565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.437573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.437793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.438012] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.438021] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.438028] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.441582] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.546 [2024-07-12 19:26:01.450794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.451488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.451525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.451536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.451775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.451998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.452008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.452021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.455580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.546 [2024-07-12 19:26:01.464595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.465239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.465258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.465266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.465486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.465706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.465715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.465722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.469273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.546 [2024-07-12 19:26:01.478482] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.479112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.479217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.479225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.479445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.479664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.479673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.479680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.483230] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.546 [2024-07-12 19:26:01.492439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.493071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.493087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.493094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.493318] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.493538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.493547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.493554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.497097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.546 [2024-07-12 19:26:01.506310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.506936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.506955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.506962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.507187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.507407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.507416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.507423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.510965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.546 [2024-07-12 19:26:01.520179] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.520805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.520819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.520827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.521045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.521270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.521279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.521286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.524830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.546 [2024-07-12 19:26:01.534032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.534626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.534642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.534649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.534868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.535088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.546 [2024-07-12 19:26:01.535096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.546 [2024-07-12 19:26:01.535103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.546 [2024-07-12 19:26:01.538650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.546 [2024-07-12 19:26:01.547849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.546 [2024-07-12 19:26:01.548505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.546 [2024-07-12 19:26:01.548543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.546 [2024-07-12 19:26:01.548554] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.546 [2024-07-12 19:26:01.548793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.546 [2024-07-12 19:26:01.549021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.549033] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.549040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.552598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.547 [2024-07-12 19:26:01.561797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.562519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.562557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.562567] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.562806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.563030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.563039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.563047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.566605] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.547 [2024-07-12 19:26:01.575591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.576332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.576369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.576380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.576618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.576841] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.576851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.576859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.580416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.547 [2024-07-12 19:26:01.589407] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.590149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.590187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.590199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.590440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.590663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.590672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.590680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.594241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.547 [2024-07-12 19:26:01.603257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.604003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.604041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.604051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.604300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.604524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.604533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.604541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.608086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.547 [2024-07-12 19:26:01.617073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.617568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.617588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.617596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.617816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.618035] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.618044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.618051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.621609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.547 [2024-07-12 19:26:01.631008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.631642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.631658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.631666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.631884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.632103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.632112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.632119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.635665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.547 [2024-07-12 19:26:01.644853] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.645444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.645460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.645472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.645691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.645911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.645919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.645926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.649473] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.547 [2024-07-12 19:26:01.658666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.659246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.659262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.659270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.659489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.659709] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.659717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.659724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.547 [2024-07-12 19:26:01.663266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.547 [2024-07-12 19:26:01.672467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.547 [2024-07-12 19:26:01.673192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.547 [2024-07-12 19:26:01.673230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.547 [2024-07-12 19:26:01.673241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.547 [2024-07-12 19:26:01.673480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.547 [2024-07-12 19:26:01.673703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.547 [2024-07-12 19:26:01.673713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.547 [2024-07-12 19:26:01.673720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.677280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.810 [2024-07-12 19:26:01.686272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.687012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.687050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.687061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.687311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.687536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.687550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.687558] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.691107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.810 [2024-07-12 19:26:01.700106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.700809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.700847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.700857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.701096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.701329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.701339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.701347] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.704894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.810 [2024-07-12 19:26:01.714098] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.714846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.714883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.714894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.715143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.715366] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.715376] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.715383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.719011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.810 [2024-07-12 19:26:01.728026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.728773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.728811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.728822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.729061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.729295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.729305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.729313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.732860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.810 [2024-07-12 19:26:01.741905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.742651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.742689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.742699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.742938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.743172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.743183] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.743190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.746738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.810 [2024-07-12 19:26:01.755729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.756382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.756419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.756430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.756669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.756892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.756901] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.756909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.760467] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.810 [2024-07-12 19:26:01.769664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.770370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.770408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.810 [2024-07-12 19:26:01.770419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.810 [2024-07-12 19:26:01.770658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.810 [2024-07-12 19:26:01.770882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.810 [2024-07-12 19:26:01.770891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.810 [2024-07-12 19:26:01.770899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.810 [2024-07-12 19:26:01.774456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.810 [2024-07-12 19:26:01.783653] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.810 [2024-07-12 19:26:01.784286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.810 [2024-07-12 19:26:01.784305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.784313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.784539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.784758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.784767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.784774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.788320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.811 [2024-07-12 19:26:01.797508] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.798189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.798227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.798237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.798476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.798699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.798710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.798717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.802277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.811 [2024-07-12 19:26:01.811504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.812235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.812272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.812283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.812522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.812745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.812754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.812762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.816319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.811 [2024-07-12 19:26:01.825314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.826040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.826077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.826088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.826336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.826560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.826570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.826582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.830126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.811 [2024-07-12 19:26:01.839111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.839847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.839885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.839895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.840145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.840369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.840378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.840386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.843934] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.811 [2024-07-12 19:26:01.852942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.853666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.853704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.853715] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.853954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.854187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.854197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.854205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.857755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.811 [2024-07-12 19:26:01.866760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.867377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.867396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.867404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.867624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.867844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.867853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.867860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.871415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.811 [2024-07-12 19:26:01.880626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.881358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.881395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.881406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.881645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.881868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.881878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.881886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.885437] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.811 [2024-07-12 19:26:01.894443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.895180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.895217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.895230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.895471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.895694] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.895704] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.895711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.899264] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.811 [2024-07-12 19:26:01.908255] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.908948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.908985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.908996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.909244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.909469] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.909478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.909485] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.913032] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:55.811 [2024-07-12 19:26:01.922243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.811 [2024-07-12 19:26:01.922882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.811 [2024-07-12 19:26:01.922920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.811 [2024-07-12 19:26:01.922930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.811 [2024-07-12 19:26:01.923183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.811 [2024-07-12 19:26:01.923407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.811 [2024-07-12 19:26:01.923417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.811 [2024-07-12 19:26:01.923424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:55.811 [2024-07-12 19:26:01.926973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:55.811 [2024-07-12 19:26:01.936184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:55.812 [2024-07-12 19:26:01.936902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:55.812 [2024-07-12 19:26:01.936939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:55.812 [2024-07-12 19:26:01.936950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:55.812 [2024-07-12 19:26:01.937199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:55.812 [2024-07-12 19:26:01.937423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:55.812 [2024-07-12 19:26:01.937432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:55.812 [2024-07-12 19:26:01.937441] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:01.940990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.073 [2024-07-12 19:26:01.949994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:01.950738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:01.950776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:01.950787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:01.951026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:01.951260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:01.951270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:01.951278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:01.954828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.073 [2024-07-12 19:26:01.963831] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:01.964458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:01.964478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:01.964485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:01.964705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:01.964925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:01.964933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:01.964944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:01.968494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.073 [2024-07-12 19:26:01.977706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:01.978413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:01.978451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:01.978462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:01.978703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:01.978926] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:01.978936] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:01.978944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:01.982498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.073 [2024-07-12 19:26:01.991696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:01.992356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:01.992393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:01.992404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:01.992642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:01.992866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:01.992876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:01.992884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:01.996436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.073 [2024-07-12 19:26:02.005636] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:02.006347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:02.006384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:02.006395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:02.006634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:02.006857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:02.006866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:02.006874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:02.010429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.073 [2024-07-12 19:26:02.019448] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:02.020140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:02.020182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:02.020195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:02.020435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:02.020658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:02.020668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:02.020676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:02.024242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.073 [2024-07-12 19:26:02.033443] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:02.034160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:02.034198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:02.034209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:02.034448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:02.034671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:02.034681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:02.034689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:02.038247] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.073 [2024-07-12 19:26:02.047253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:02.047991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:02.048029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:02.048040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:02.048289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:02.048513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:02.048522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:02.048530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:02.052076] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.073 [2024-07-12 19:26:02.061066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:02.061777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:02.061815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:02.061825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:02.062064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:02.062303] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.073 [2024-07-12 19:26:02.062314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.073 [2024-07-12 19:26:02.062321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.073 [2024-07-12 19:26:02.065869] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.073 [2024-07-12 19:26:02.074867] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.073 [2024-07-12 19:26:02.075596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.073 [2024-07-12 19:26:02.075633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.073 [2024-07-12 19:26:02.075645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.073 [2024-07-12 19:26:02.075884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.073 [2024-07-12 19:26:02.076107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.076117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.076133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.079685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.074 [2024-07-12 19:26:02.088693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.089350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.089369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.089378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.089598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.089817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.089826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.089833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.093384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.074 [2024-07-12 19:26:02.102593] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.103213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.103229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.103238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.103456] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.103676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.103685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.103692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.107245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.074 [2024-07-12 19:26:02.116453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.116954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.116970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.116977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.117202] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.117423] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.117431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.117438] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.120981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.074 [2024-07-12 19:26:02.130408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.131140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.131177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.131189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.131430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.131654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.131663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.131670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.135221] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.074 [2024-07-12 19:26:02.144210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.144699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.144719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.144727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.144948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.145181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.145193] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.145200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.148740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.074 [2024-07-12 19:26:02.158146] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.158869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.158907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.158922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.159171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.159395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.159404] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.159411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.162958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.074 [2024-07-12 19:26:02.171952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.172648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.172685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.172696] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.172935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.173168] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.173178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.173186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.176735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.074 [2024-07-12 19:26:02.185934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.186625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.186663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.186674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.186913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.187144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.187154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.187163] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.074 [2024-07-12 19:26:02.190708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.074 [2024-07-12 19:26:02.199902] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.074 [2024-07-12 19:26:02.200605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.074 [2024-07-12 19:26:02.200642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.074 [2024-07-12 19:26:02.200653] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.074 [2024-07-12 19:26:02.200892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.074 [2024-07-12 19:26:02.201116] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.074 [2024-07-12 19:26:02.201140] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.074 [2024-07-12 19:26:02.201148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.335 [2024-07-12 19:26:02.204698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.335 [2024-07-12 19:26:02.213898] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.335 [2024-07-12 19:26:02.214640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.335 [2024-07-12 19:26:02.214678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.335 [2024-07-12 19:26:02.214689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.335 [2024-07-12 19:26:02.214928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.335 [2024-07-12 19:26:02.215160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.335 [2024-07-12 19:26:02.215170] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.335 [2024-07-12 19:26:02.215178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.335 [2024-07-12 19:26:02.218724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.335 [2024-07-12 19:26:02.227754] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.335 [2024-07-12 19:26:02.228357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.335 [2024-07-12 19:26:02.228377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.335 [2024-07-12 19:26:02.228385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.228604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.228826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.228835] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.228842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.232390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.336 [2024-07-12 19:26:02.241585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.242217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.242234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.242241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.242460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.242680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.242688] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.242695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.246239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.336 [2024-07-12 19:26:02.255434] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.256139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.256177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.256189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.256430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.256654] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.256663] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.256671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.260229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.336 [2024-07-12 19:26:02.269426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.270160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.270198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.270210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.270451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.270674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.270683] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.270691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.274249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.336 [2024-07-12 19:26:02.283236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.283970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.284008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.284019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.284266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.284491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.284502] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.284510] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.288056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.336 [2024-07-12 19:26:02.297052] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.297797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.297835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.297845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.298089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.298321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.298332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.298339] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.301888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.336 [2024-07-12 19:26:02.310882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.311582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.311619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.311630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.311869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.312092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.312102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.312110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.315667] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.336 [2024-07-12 19:26:02.324876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.325576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.325614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.325625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.325864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.326087] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.326097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.326105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.329663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.336 [2024-07-12 19:26:02.338862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.339576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.339613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.339624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.339863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.340086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.340096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.340108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.343663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.336 [2024-07-12 19:26:02.352656] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.353413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.353451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.353462] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.353701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.353924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.353934] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.353941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.357496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.336 [2024-07-12 19:26:02.366487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.367224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.336 [2024-07-12 19:26:02.367262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.336 [2024-07-12 19:26:02.367273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.336 [2024-07-12 19:26:02.367511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.336 [2024-07-12 19:26:02.367734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.336 [2024-07-12 19:26:02.367744] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.336 [2024-07-12 19:26:02.367752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.336 [2024-07-12 19:26:02.371308] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.336 [2024-07-12 19:26:02.380299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.336 [2024-07-12 19:26:02.381013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.337 [2024-07-12 19:26:02.381050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.337 [2024-07-12 19:26:02.381061] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.337 [2024-07-12 19:26:02.381309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.337 [2024-07-12 19:26:02.381533] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.337 [2024-07-12 19:26:02.381543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.337 [2024-07-12 19:26:02.381550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.337 [2024-07-12 19:26:02.385284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.337 [2024-07-12 19:26:02.394288] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.337 [2024-07-12 19:26:02.395031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.337 [2024-07-12 19:26:02.395069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.337 [2024-07-12 19:26:02.395081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.337 [2024-07-12 19:26:02.395331] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.337 [2024-07-12 19:26:02.395555] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.337 [2024-07-12 19:26:02.395565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.337 [2024-07-12 19:26:02.395572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.337 [2024-07-12 19:26:02.399119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.337 [2024-07-12 19:26:02.408108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.337 [2024-07-12 19:26:02.408772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.337 [2024-07-12 19:26:02.408810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.337 [2024-07-12 19:26:02.408821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.337 [2024-07-12 19:26:02.409060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.337 [2024-07-12 19:26:02.409292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.337 [2024-07-12 19:26:02.409302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.337 [2024-07-12 19:26:02.409309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.337 [2024-07-12 19:26:02.412855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.337 [2024-07-12 19:26:02.422047] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.337 [2024-07-12 19:26:02.422643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.337 [2024-07-12 19:26:02.422679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.337 [2024-07-12 19:26:02.422690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.337 [2024-07-12 19:26:02.422928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.337 [2024-07-12 19:26:02.423161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.337 [2024-07-12 19:26:02.423171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.337 [2024-07-12 19:26:02.423179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.337 [2024-07-12 19:26:02.426724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.337 [2024-07-12 19:26:02.435951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.337 [2024-07-12 19:26:02.436606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.337 [2024-07-12 19:26:02.436624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.337 [2024-07-12 19:26:02.436632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.337 [2024-07-12 19:26:02.436852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.337 [2024-07-12 19:26:02.437077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.337 [2024-07-12 19:26:02.437086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.337 [2024-07-12 19:26:02.437093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.337 [2024-07-12 19:26:02.440663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.337 [2024-07-12 19:26:02.449860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.337 [2024-07-12 19:26:02.450550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.337 [2024-07-12 19:26:02.450588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.337 [2024-07-12 19:26:02.450599] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.337 [2024-07-12 19:26:02.450837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.337 [2024-07-12 19:26:02.451061] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.337 [2024-07-12 19:26:02.451071] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.337 [2024-07-12 19:26:02.451078] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.337 [2024-07-12 19:26:02.454637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.337 [2024-07-12 19:26:02.463839] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.599 [2024-07-12 19:26:02.464551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-12 19:26:02.464589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.599 [2024-07-12 19:26:02.464600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.599 [2024-07-12 19:26:02.464839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.599 [2024-07-12 19:26:02.465062] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.599 [2024-07-12 19:26:02.465072] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.599 [2024-07-12 19:26:02.465079] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.599 [2024-07-12 19:26:02.468634] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.599 [2024-07-12 19:26:02.477828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.599 [2024-07-12 19:26:02.478506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-12 19:26:02.478544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.599 [2024-07-12 19:26:02.478555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.599 [2024-07-12 19:26:02.478794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.599 [2024-07-12 19:26:02.479017] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.599 [2024-07-12 19:26:02.479027] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.599 [2024-07-12 19:26:02.479034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.599 [2024-07-12 19:26:02.482594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.599 [2024-07-12 19:26:02.491794] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.599 [2024-07-12 19:26:02.492473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-12 19:26:02.492510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.599 [2024-07-12 19:26:02.492521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.599 [2024-07-12 19:26:02.492760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.599 [2024-07-12 19:26:02.492983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.599 [2024-07-12 19:26:02.492993] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.599 [2024-07-12 19:26:02.493001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.599 [2024-07-12 19:26:02.496559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.599 [2024-07-12 19:26:02.505797] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.599 [2024-07-12 19:26:02.506499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.599 [2024-07-12 19:26:02.506536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.506548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.506787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.507010] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.507019] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.507027] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.510584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.600 [2024-07-12 19:26:02.519786] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.520393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.520412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.520420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.520640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.520860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.520869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.520876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.524435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.600 [2024-07-12 19:26:02.533631] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.534081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.534101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.534113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.534340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.534561] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.534570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.534577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.538119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.600 [2024-07-12 19:26:02.547527] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.548222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.548259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.548272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.548513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.548737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.548747] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.548756] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.552314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.600 [2024-07-12 19:26:02.561517] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.562228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.562266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.562279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.562522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.562745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.562754] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.562762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.566318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.600 [2024-07-12 19:26:02.575311] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.575797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.575818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.575827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.576047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.576279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.576290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.576297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.579843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.600 [2024-07-12 19:26:02.589257] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.589936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.589973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.589984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.600 [2024-07-12 19:26:02.590231] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.600 [2024-07-12 19:26:02.590456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.600 [2024-07-12 19:26:02.590465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.600 [2024-07-12 19:26:02.590473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.600 [2024-07-12 19:26:02.594022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.600 [2024-07-12 19:26:02.603224] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.600 [2024-07-12 19:26:02.603970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.600 [2024-07-12 19:26:02.604007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.600 [2024-07-12 19:26:02.604018] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.604265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.604489] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.604499] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.604507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.608053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.601 [2024-07-12 19:26:02.617044] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.617658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.617677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.617684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.617904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.618129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.618138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.618146] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.621689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.601 [2024-07-12 19:26:02.630905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.631513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.631530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.631538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.631756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.631975] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.631984] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.631991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.635554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.601 [2024-07-12 19:26:02.644782] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.645384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.645402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.645409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.645628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.645848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.645856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.645863] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.649410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.601 [2024-07-12 19:26:02.658607] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.659118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.659138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.659145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.659364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.659583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.659591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.659598] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.663141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.601 [2024-07-12 19:26:02.672547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.673173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.673188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.673200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.673418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.673637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.673646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.673654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.677197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.601 [2024-07-12 19:26:02.686393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.687119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.687165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.687176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.687415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.601 [2024-07-12 19:26:02.687640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.601 [2024-07-12 19:26:02.687650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.601 [2024-07-12 19:26:02.687658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.601 [2024-07-12 19:26:02.691212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.601 [2024-07-12 19:26:02.700206] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.601 [2024-07-12 19:26:02.700932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.601 [2024-07-12 19:26:02.700970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.601 [2024-07-12 19:26:02.700981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.601 [2024-07-12 19:26:02.701227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.602 [2024-07-12 19:26:02.701451] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.602 [2024-07-12 19:26:02.701460] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.602 [2024-07-12 19:26:02.701468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.602 [2024-07-12 19:26:02.705017] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.602 [2024-07-12 19:26:02.714013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.602 [2024-07-12 19:26:02.714628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.602 [2024-07-12 19:26:02.714647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.602 [2024-07-12 19:26:02.714655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.602 [2024-07-12 19:26:02.714874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.602 [2024-07-12 19:26:02.715094] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.602 [2024-07-12 19:26:02.715107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.602 [2024-07-12 19:26:02.715115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.602 [2024-07-12 19:26:02.718660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.602 [2024-07-12 19:26:02.727873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.867 [2024-07-12 19:26:02.728420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-07-12 19:26:02.728437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.867 [2024-07-12 19:26:02.728445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.867 [2024-07-12 19:26:02.728663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.867 [2024-07-12 19:26:02.728883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.867 [2024-07-12 19:26:02.728893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.867 [2024-07-12 19:26:02.728900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.867 [2024-07-12 19:26:02.732445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.867 [2024-07-12 19:26:02.741857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.867 [2024-07-12 19:26:02.742493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-07-12 19:26:02.742509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.867 [2024-07-12 19:26:02.742517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.867 [2024-07-12 19:26:02.742735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.867 [2024-07-12 19:26:02.742954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.867 [2024-07-12 19:26:02.742963] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.867 [2024-07-12 19:26:02.742970] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.867 [2024-07-12 19:26:02.746515] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.867 [2024-07-12 19:26:02.755784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.867 [2024-07-12 19:26:02.756380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-07-12 19:26:02.756397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.867 [2024-07-12 19:26:02.756405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.867 [2024-07-12 19:26:02.756624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.867 [2024-07-12 19:26:02.756844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.867 [2024-07-12 19:26:02.756853] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.867 [2024-07-12 19:26:02.756860] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.867 [2024-07-12 19:26:02.760425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.867 [2024-07-12 19:26:02.769627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.867 [2024-07-12 19:26:02.770401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-07-12 19:26:02.770439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.867 [2024-07-12 19:26:02.770450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.867 [2024-07-12 19:26:02.770689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.867 [2024-07-12 19:26:02.770912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.867 [2024-07-12 19:26:02.770922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.867 [2024-07-12 19:26:02.770930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.867 [2024-07-12 19:26:02.774482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.867 [2024-07-12 19:26:02.783477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.867 [2024-07-12 19:26:02.784093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-07-12 19:26:02.784137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.867 [2024-07-12 19:26:02.784151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.867 [2024-07-12 19:26:02.784391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.867 [2024-07-12 19:26:02.784614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.867 [2024-07-12 19:26:02.784624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.867 [2024-07-12 19:26:02.784632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.867 [2024-07-12 19:26:02.788187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.867 [2024-07-12 19:26:02.797391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.867 [2024-07-12 19:26:02.798102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.867 [2024-07-12 19:26:02.798146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.867 [2024-07-12 19:26:02.798158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.867 [2024-07-12 19:26:02.798398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.798621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.798631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.798639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.802191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.868 [2024-07-12 19:26:02.811186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.811905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.811943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.811954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.812206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.812430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.812440] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.812448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.815997] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.868 [2024-07-12 19:26:02.825002] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.825685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.825723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.825734] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.825973] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.826204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.826214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.826221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.829771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.868 [2024-07-12 19:26:02.838980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.839700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.839738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.839749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.839988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.840219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.840229] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.840236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.843784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.868 [2024-07-12 19:26:02.852810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.853529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.853566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.853577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.853816] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.854039] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.854048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.854060] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.857616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.868 [2024-07-12 19:26:02.866614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.867405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.867443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.867455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.867694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.867917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.867927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.867935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.871491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.868 [2024-07-12 19:26:02.880487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.881134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.881154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.881162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.881382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.881602] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.881611] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.881618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.885168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.868 [2024-07-12 19:26:02.894368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.895100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.895145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.895158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.895398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.895622] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.895631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.895639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.899190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.868 [2024-07-12 19:26:02.908186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.868 [2024-07-12 19:26:02.908907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.868 [2024-07-12 19:26:02.908951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.868 [2024-07-12 19:26:02.908962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.868 [2024-07-12 19:26:02.909209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.868 [2024-07-12 19:26:02.909432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.868 [2024-07-12 19:26:02.909443] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.868 [2024-07-12 19:26:02.909450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.868 [2024-07-12 19:26:02.912998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.869 [2024-07-12 19:26:02.921993] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.869 [2024-07-12 19:26:02.922622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-07-12 19:26:02.922641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.869 [2024-07-12 19:26:02.922649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.869 [2024-07-12 19:26:02.922869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.869 [2024-07-12 19:26:02.923088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.869 [2024-07-12 19:26:02.923097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.869 [2024-07-12 19:26:02.923104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.869 [2024-07-12 19:26:02.926650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.869 [2024-07-12 19:26:02.935848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.869 [2024-07-12 19:26:02.936477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-07-12 19:26:02.936494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.869 [2024-07-12 19:26:02.936501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.869 [2024-07-12 19:26:02.936720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.869 [2024-07-12 19:26:02.936940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.869 [2024-07-12 19:26:02.936948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.869 [2024-07-12 19:26:02.936955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.869 [2024-07-12 19:26:02.940500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.869 [2024-07-12 19:26:02.949695] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.869 [2024-07-12 19:26:02.950418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-07-12 19:26:02.950455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.869 [2024-07-12 19:26:02.950466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.869 [2024-07-12 19:26:02.950706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.869 [2024-07-12 19:26:02.950934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.869 [2024-07-12 19:26:02.950943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.869 [2024-07-12 19:26:02.950951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.869 [2024-07-12 19:26:02.954508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.869 [2024-07-12 19:26:02.963504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.869 [2024-07-12 19:26:02.964077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-07-12 19:26:02.964115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.869 [2024-07-12 19:26:02.964134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.869 [2024-07-12 19:26:02.964374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.869 [2024-07-12 19:26:02.964598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.869 [2024-07-12 19:26:02.964607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.869 [2024-07-12 19:26:02.964615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.869 [2024-07-12 19:26:02.968167] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:56.869 [2024-07-12 19:26:02.977372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.869 [2024-07-12 19:26:02.978112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-07-12 19:26:02.978156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.869 [2024-07-12 19:26:02.978168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.869 [2024-07-12 19:26:02.978407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.869 [2024-07-12 19:26:02.978630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.869 [2024-07-12 19:26:02.978639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.869 [2024-07-12 19:26:02.978647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:56.869 [2024-07-12 19:26:02.982199] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:56.869 [2024-07-12 19:26:02.991195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:56.869 [2024-07-12 19:26:02.991937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:56.869 [2024-07-12 19:26:02.991975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:56.869 [2024-07-12 19:26:02.991986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:56.869 [2024-07-12 19:26:02.992233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:56.869 [2024-07-12 19:26:02.992457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:56.869 [2024-07-12 19:26:02.992467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:56.869 [2024-07-12 19:26:02.992475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.133 [2024-07-12 19:26:02.996027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.133 [2024-07-12 19:26:03.005028] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.133 [2024-07-12 19:26:03.005643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-12 19:26:03.005662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.133 [2024-07-12 19:26:03.005671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.133 [2024-07-12 19:26:03.005891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.133 [2024-07-12 19:26:03.006111] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.133 [2024-07-12 19:26:03.006120] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.133 [2024-07-12 19:26:03.006133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.133 [2024-07-12 19:26:03.009674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.133 [2024-07-12 19:26:03.018873] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.133 [2024-07-12 19:26:03.019502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-12 19:26:03.019540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.133 [2024-07-12 19:26:03.019550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.133 [2024-07-12 19:26:03.019789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.133 [2024-07-12 19:26:03.020013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.133 [2024-07-12 19:26:03.020022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.133 [2024-07-12 19:26:03.020030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.133 [2024-07-12 19:26:03.023600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.133 [2024-07-12 19:26:03.032804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.133 [2024-07-12 19:26:03.033552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-12 19:26:03.033590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.133 [2024-07-12 19:26:03.033601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.133 [2024-07-12 19:26:03.033839] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.133 [2024-07-12 19:26:03.034063] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.133 [2024-07-12 19:26:03.034073] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.133 [2024-07-12 19:26:03.034081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.133 [2024-07-12 19:26:03.037638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.133 [2024-07-12 19:26:03.046630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.133 [2024-07-12 19:26:03.047369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.133 [2024-07-12 19:26:03.047407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.133 [2024-07-12 19:26:03.047422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.047662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.047886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.047895] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.047903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.051461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.134 [2024-07-12 19:26:03.060480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.061226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.061264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.061277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.061517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.061741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.061750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.061758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.065317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.134 [2024-07-12 19:26:03.074310] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.075030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.075067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.075079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.075328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.075551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.075560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.075568] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.079115] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.134 [2024-07-12 19:26:03.088110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.088747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.088785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.088797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.089036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.089267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.089281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.089289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.092838] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.134 [2024-07-12 19:26:03.102039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.102653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.102672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.102680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.102900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.103120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.103134] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.103142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.106685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.134 [2024-07-12 19:26:03.115882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.116505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.116543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.116555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.116795] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.117019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.117028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.117036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.120594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.134 [2024-07-12 19:26:03.129814] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.130536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.130574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.130584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.130824] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.131047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.131056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.131064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.134619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.134 [2024-07-12 19:26:03.143620] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.144280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.144299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.144307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.144527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.144747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.144755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.144762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.148310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.134 [2024-07-12 19:26:03.157505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.158171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.158209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.158221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.158464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.158687] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.158697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.158705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.162258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.134 [2024-07-12 19:26:03.171457] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.171982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.172000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.172008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.172235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.172455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.172464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.172471] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.176011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.134 [2024-07-12 19:26:03.185418] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.134 [2024-07-12 19:26:03.186150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.134 [2024-07-12 19:26:03.186188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.134 [2024-07-12 19:26:03.186204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.134 [2024-07-12 19:26:03.186443] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.134 [2024-07-12 19:26:03.186667] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.134 [2024-07-12 19:26:03.186676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.134 [2024-07-12 19:26:03.186684] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.134 [2024-07-12 19:26:03.190240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.135 [2024-07-12 19:26:03.199232] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.135 [2024-07-12 19:26:03.199936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.135 [2024-07-12 19:26:03.199973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.135 [2024-07-12 19:26:03.199984] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.135 [2024-07-12 19:26:03.200230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.135 [2024-07-12 19:26:03.200455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.135 [2024-07-12 19:26:03.200464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.135 [2024-07-12 19:26:03.200472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.135 [2024-07-12 19:26:03.204019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.135 [2024-07-12 19:26:03.213225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.135 [2024-07-12 19:26:03.213842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.135 [2024-07-12 19:26:03.213880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.135 [2024-07-12 19:26:03.213892] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.135 [2024-07-12 19:26:03.214141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.135 [2024-07-12 19:26:03.214364] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.135 [2024-07-12 19:26:03.214374] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.135 [2024-07-12 19:26:03.214382] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.135 [2024-07-12 19:26:03.217927] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.135 [2024-07-12 19:26:03.227140] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.135 [2024-07-12 19:26:03.227898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.135 [2024-07-12 19:26:03.227935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.135 [2024-07-12 19:26:03.227945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.135 [2024-07-12 19:26:03.228193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.135 [2024-07-12 19:26:03.228417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.135 [2024-07-12 19:26:03.228431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.135 [2024-07-12 19:26:03.228439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.135 [2024-07-12 19:26:03.231987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.135 [2024-07-12 19:26:03.240981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.135 [2024-07-12 19:26:03.241760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.135 [2024-07-12 19:26:03.241797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.135 [2024-07-12 19:26:03.241809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.135 [2024-07-12 19:26:03.242048] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.135 [2024-07-12 19:26:03.242277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.135 [2024-07-12 19:26:03.242287] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.135 [2024-07-12 19:26:03.242295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.135 [2024-07-12 19:26:03.245843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.135 [2024-07-12 19:26:03.254837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.135 [2024-07-12 19:26:03.255485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.135 [2024-07-12 19:26:03.255523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.135 [2024-07-12 19:26:03.255535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.135 [2024-07-12 19:26:03.255776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.135 [2024-07-12 19:26:03.255999] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.135 [2024-07-12 19:26:03.256008] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.135 [2024-07-12 19:26:03.256016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.135 [2024-07-12 19:26:03.259568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.396 [2024-07-12 19:26:03.268804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.396 [2024-07-12 19:26:03.269456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.396 [2024-07-12 19:26:03.269476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.396 [2024-07-12 19:26:03.269483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.396 [2024-07-12 19:26:03.269703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.396 [2024-07-12 19:26:03.269923] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.396 [2024-07-12 19:26:03.269932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.396 [2024-07-12 19:26:03.269939] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.396 [2024-07-12 19:26:03.273484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.396 [2024-07-12 19:26:03.282679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.396 [2024-07-12 19:26:03.283402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.396 [2024-07-12 19:26:03.283439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.396 [2024-07-12 19:26:03.283450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.396 [2024-07-12 19:26:03.283689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.396 [2024-07-12 19:26:03.283912] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.396 [2024-07-12 19:26:03.283922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.396 [2024-07-12 19:26:03.283929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.396 [2024-07-12 19:26:03.287484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.396 [2024-07-12 19:26:03.296474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.396 [2024-07-12 19:26:03.297153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.396 [2024-07-12 19:26:03.297190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.396 [2024-07-12 19:26:03.297201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.396 [2024-07-12 19:26:03.297440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.396 [2024-07-12 19:26:03.297662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.396 [2024-07-12 19:26:03.297672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.396 [2024-07-12 19:26:03.297680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.396 [2024-07-12 19:26:03.301239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.396 [2024-07-12 19:26:03.310436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.396 [2024-07-12 19:26:03.311149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.396 [2024-07-12 19:26:03.311187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.396 [2024-07-12 19:26:03.311199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.396 [2024-07-12 19:26:03.311439] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.396 [2024-07-12 19:26:03.311662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.396 [2024-07-12 19:26:03.311671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.396 [2024-07-12 19:26:03.311679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.396 [2024-07-12 19:26:03.315232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.396 [2024-07-12 19:26:03.324441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.396 [2024-07-12 19:26:03.325181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.396 [2024-07-12 19:26:03.325219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.325229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.325473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.325697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.325706] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.325714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.329272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.397 [2024-07-12 19:26:03.338272] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.338887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.338905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.338913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.339140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.339360] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.339370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.339377] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.342917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.397 [2024-07-12 19:26:03.352110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.352790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.352828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.352838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.353077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.353310] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.353321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.353329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.356878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.397 [2024-07-12 19:26:03.366075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.366815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.366852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.366863] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.367102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.367335] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.367345] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.367356] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.370905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.397 [2024-07-12 19:26:03.379895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.380596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.380634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.380645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.380884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.381107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.381117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.381133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.384872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.397 [2024-07-12 19:26:03.393872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.394589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.394627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.394640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.394880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.395103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.395112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.395119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.398679] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.397 [2024-07-12 19:26:03.407665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.408312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.408332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.408339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.408559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.408779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.408788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.408795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.412340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.397 [2024-07-12 19:26:03.421533] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.422116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.422149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.422157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.422376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.422596] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.422604] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.422611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.426154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.397 [2024-07-12 19:26:03.435346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.436021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.436058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.436069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.436317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.436541] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.436551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.436559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.440105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.397 [2024-07-12 19:26:03.449307] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.449831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.449852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.449860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.450079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.450306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.450315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.450322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.397 [2024-07-12 19:26:03.453862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.397 [2024-07-12 19:26:03.463265] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.397 [2024-07-12 19:26:03.463856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.397 [2024-07-12 19:26:03.463872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.397 [2024-07-12 19:26:03.463879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.397 [2024-07-12 19:26:03.464098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.397 [2024-07-12 19:26:03.464416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.397 [2024-07-12 19:26:03.464428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.397 [2024-07-12 19:26:03.464435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.398 [2024-07-12 19:26:03.467977] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.398 [2024-07-12 19:26:03.477202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.398 [2024-07-12 19:26:03.477937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.398 [2024-07-12 19:26:03.477974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.398 [2024-07-12 19:26:03.477985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.398 [2024-07-12 19:26:03.478232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.398 [2024-07-12 19:26:03.478456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.398 [2024-07-12 19:26:03.478465] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.398 [2024-07-12 19:26:03.478473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.398 [2024-07-12 19:26:03.482019] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.398 [2024-07-12 19:26:03.491011] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.398 [2024-07-12 19:26:03.491749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.398 [2024-07-12 19:26:03.491786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.398 [2024-07-12 19:26:03.491797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.398 [2024-07-12 19:26:03.492036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.398 [2024-07-12 19:26:03.492269] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.398 [2024-07-12 19:26:03.492279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.398 [2024-07-12 19:26:03.492287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.398 [2024-07-12 19:26:03.495835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.398 [2024-07-12 19:26:03.504828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.398 [2024-07-12 19:26:03.505529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.398 [2024-07-12 19:26:03.505567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.398 [2024-07-12 19:26:03.505578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.398 [2024-07-12 19:26:03.505817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.398 [2024-07-12 19:26:03.506040] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.398 [2024-07-12 19:26:03.506050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.398 [2024-07-12 19:26:03.506058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.398 [2024-07-12 19:26:03.509622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.398 [2024-07-12 19:26:03.518822] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.398 [2024-07-12 19:26:03.519449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.398 [2024-07-12 19:26:03.519469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.398 [2024-07-12 19:26:03.519477] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.398 [2024-07-12 19:26:03.519696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.398 [2024-07-12 19:26:03.519916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.398 [2024-07-12 19:26:03.519924] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.398 [2024-07-12 19:26:03.519931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.398 [2024-07-12 19:26:03.523489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.660 [2024-07-12 19:26:03.532685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.660 [2024-07-12 19:26:03.533352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.660 [2024-07-12 19:26:03.533389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.660 [2024-07-12 19:26:03.533400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.660 [2024-07-12 19:26:03.533639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.660 [2024-07-12 19:26:03.533862] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.660 [2024-07-12 19:26:03.533871] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.660 [2024-07-12 19:26:03.533878] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.660 [2024-07-12 19:26:03.537435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.660 [2024-07-12 19:26:03.546643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.660 [2024-07-12 19:26:03.547414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.660 [2024-07-12 19:26:03.547452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.660 [2024-07-12 19:26:03.547463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.660 [2024-07-12 19:26:03.547702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.660 [2024-07-12 19:26:03.547925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.660 [2024-07-12 19:26:03.547935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.660 [2024-07-12 19:26:03.547942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.660 [2024-07-12 19:26:03.551500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.660 [2024-07-12 19:26:03.560489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.660 [2024-07-12 19:26:03.561223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.660 [2024-07-12 19:26:03.561260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.660 [2024-07-12 19:26:03.561277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.660 [2024-07-12 19:26:03.561517] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.660 [2024-07-12 19:26:03.561740] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.660 [2024-07-12 19:26:03.561749] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.660 [2024-07-12 19:26:03.561757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.660 [2024-07-12 19:26:03.565314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.660 [2024-07-12 19:26:03.574302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.660 [2024-07-12 19:26:03.574992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.660 [2024-07-12 19:26:03.575030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.660 [2024-07-12 19:26:03.575041] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.660 [2024-07-12 19:26:03.575288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.660 [2024-07-12 19:26:03.575512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.660 [2024-07-12 19:26:03.575522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.660 [2024-07-12 19:26:03.575529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.579077] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.661 [2024-07-12 19:26:03.588280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.588917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.588954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.588965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.589213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.589437] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.589446] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.589454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.592999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.661 [2024-07-12 19:26:03.602204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.602933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.602971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.602981] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.603230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.603454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.603467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.603475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.607023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.661 [2024-07-12 19:26:03.616013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.616753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.616790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.616801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.617040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.617271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.617281] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.617288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.620835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.661 [2024-07-12 19:26:03.629834] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.630533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.630571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.630581] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.630820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.631044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.631054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.631061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.634617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.661 [2024-07-12 19:26:03.643815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.644395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.644433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.644444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.644683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.644907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.644916] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.644924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.648482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.661 [2024-07-12 19:26:03.657687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.658397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.658434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.658445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.658684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.658907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.658917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.658924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.662480] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.661 [2024-07-12 19:26:03.671679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.672407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.672444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.672455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.672694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.672918] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.672927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.672934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.676498] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1612068 Killed "${NVMF_APP[@]}" "$@"
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:57.661 [2024-07-12 19:26:03.685529] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.661 [2024-07-12 19:26:03.686231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.661 [2024-07-12 19:26:03.686269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.661 [2024-07-12 19:26:03.686281] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.661 [2024-07-12 19:26:03.686522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.661 [2024-07-12 19:26:03.686745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.661 [2024-07-12 19:26:03.686756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.661 [2024-07-12 19:26:03.686766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1613774
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1613774
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1613774 ']'
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:57.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:57.661 19:26:03 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:29:57.661 [2024-07-12 19:26:03.690326] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
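For reference, the restart that begins here ends with nvmf/common.sh launching the target again; the exact command appears in the xtrace above. A hedged reading of its flags (the command line is copied from this log; the flag meanings are the standard SPDK application options, stated here as background rather than something the log spells out):

    # command recorded in the xtrace above, reformatted for readability
    #   -i 0       instance / shared-memory id (the same id 'spdk_trace -i 0' refers to later)
    #   -e 0xFFFF  tracepoint group mask, matching the "Tracepoint Group Mask 0xFFFF" notice further down
    #   -m 0xE     core mask 0b1110, i.e. cores 1-3, matching the three "Reactor started" notices further down
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE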
00:29:57.661 [2024-07-12 19:26:03.699528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.700224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.661 [2024-07-12 19:26:03.700261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.661 [2024-07-12 19:26:03.700273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.661 [2024-07-12 19:26:03.700514] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.661 [2024-07-12 19:26:03.700737] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.661 [2024-07-12 19:26:03.700745] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.661 [2024-07-12 19:26:03.700753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.661 [2024-07-12 19:26:03.704306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.661 [2024-07-12 19:26:03.713505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.661 [2024-07-12 19:26:03.714158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-07-12 19:26:03.714195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.662 [2024-07-12 19:26:03.714207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.662 [2024-07-12 19:26:03.714449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.662 [2024-07-12 19:26:03.714672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.662 [2024-07-12 19:26:03.714681] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.662 [2024-07-12 19:26:03.714689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.662 [2024-07-12 19:26:03.718248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.662 [2024-07-12 19:26:03.727468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.662 [2024-07-12 19:26:03.727948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.662 [2024-07-12 19:26:03.727966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.662 [2024-07-12 19:26:03.727974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.662 [2024-07-12 19:26:03.728204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.662 [2024-07-12 19:26:03.728424] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.662 [2024-07-12 19:26:03.728432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.662 [2024-07-12 19:26:03.728439] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.662 [2024-07-12 19:26:03.731988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.662 [2024-07-12 19:26:03.736372] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization...
00:29:57.662 [2024-07-12 19:26:03.736416] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:57.662 [2024-07-12 19:26:03.741406] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.662 [2024-07-12 19:26:03.742143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.662 [2024-07-12 19:26:03.742180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.662 [2024-07-12 19:26:03.742191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.662 [2024-07-12 19:26:03.742430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.662 [2024-07-12 19:26:03.742653] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.662 [2024-07-12 19:26:03.742669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.662 [2024-07-12 19:26:03.742677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.662 [2024-07-12 19:26:03.746228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.662 [2024-07-12 19:26:03.755221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.662 [2024-07-12 19:26:03.755925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.662 [2024-07-12 19:26:03.755962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.662 [2024-07-12 19:26:03.755973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.662 [2024-07-12 19:26:03.756219] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.662 [2024-07-12 19:26:03.756443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.662 [2024-07-12 19:26:03.756451] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.662 [2024-07-12 19:26:03.756459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.662 [2024-07-12 19:26:03.760006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.662 EAL: No free 2048 kB hugepages reported on node 1
00:29:57.662 [2024-07-12 19:26:03.769207] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.662 [2024-07-12 19:26:03.769949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.662 [2024-07-12 19:26:03.769985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.662 [2024-07-12 19:26:03.769996] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.662 [2024-07-12 19:26:03.770251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.662 [2024-07-12 19:26:03.770474] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.662 [2024-07-12 19:26:03.770485] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.662 [2024-07-12 19:26:03.770493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.662 [2024-07-12 19:26:03.774040] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
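The EAL line above ("No free 2048 kB hugepages reported on node 1") is informational here: it only says NUMA node 1 has no 2 MB hugepages reserved, and the target still comes up a few entries later. If it needed checking, the per-node counts live in standard procfs/sysfs paths (generic Linux commands, not taken from this run):

    # overall hugepage pool
    grep HugePages_ /proc/meminfo
    # per-NUMA-node 2 MB hugepage counts; a 0 for node1 would explain the EAL message
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages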
00:29:57.662 [2024-07-12 19:26:03.783026] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.662 [2024-07-12 19:26:03.783766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.662 [2024-07-12 19:26:03.783802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.662 [2024-07-12 19:26:03.783814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.662 [2024-07-12 19:26:03.784057] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.662 [2024-07-12 19:26:03.784290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.662 [2024-07-12 19:26:03.784300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.662 [2024-07-12 19:26:03.784308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.662 [2024-07-12 19:26:03.787854] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.925 [2024-07-12 19:26:03.796920] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.925 [2024-07-12 19:26:03.797544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.925 [2024-07-12 19:26:03.797563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.925 [2024-07-12 19:26:03.797571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.925 [2024-07-12 19:26:03.797790] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.925 [2024-07-12 19:26:03.798009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.925 [2024-07-12 19:26:03.798017] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.925 [2024-07-12 19:26:03.798024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.925 [2024-07-12 19:26:03.801572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.925 [2024-07-12 19:26:03.810768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.925 [2024-07-12 19:26:03.811359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.925 [2024-07-12 19:26:03.811375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.925 [2024-07-12 19:26:03.811383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.925 [2024-07-12 19:26:03.811602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.925 [2024-07-12 19:26:03.811821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.925 [2024-07-12 19:26:03.811829] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.925 [2024-07-12 19:26:03.811839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.925 [2024-07-12 19:26:03.815384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.925 [2024-07-12 19:26:03.815940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:29:57.925 [2024-07-12 19:26:03.824594] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.925 [2024-07-12 19:26:03.825246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.925 [2024-07-12 19:26:03.825262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.925 [2024-07-12 19:26:03.825270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.925 [2024-07-12 19:26:03.825489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.925 [2024-07-12 19:26:03.825708] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.925 [2024-07-12 19:26:03.825717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.925 [2024-07-12 19:26:03.825723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.925 [2024-07-12 19:26:03.829266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.925 [2024-07-12 19:26:03.838456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.925 [2024-07-12 19:26:03.839191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.925 [2024-07-12 19:26:03.839228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.925 [2024-07-12 19:26:03.839240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.925 [2024-07-12 19:26:03.839484] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.925 [2024-07-12 19:26:03.839707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.925 [2024-07-12 19:26:03.839715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.925 [2024-07-12 19:26:03.839723] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.925 [2024-07-12 19:26:03.843278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.925 [2024-07-12 19:26:03.852274] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.925 [2024-07-12 19:26:03.853037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.925 [2024-07-12 19:26:03.853074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.925 [2024-07-12 19:26:03.853086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.925 [2024-07-12 19:26:03.853336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.925 [2024-07-12 19:26:03.853560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.925 [2024-07-12 19:26:03.853568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.925 [2024-07-12 19:26:03.853576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.925 [2024-07-12 19:26:03.857119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.925 [2024-07-12 19:26:03.866101] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.925 [2024-07-12 19:26:03.866769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.925 [2024-07-12 19:26:03.866787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.925 [2024-07-12 19:26:03.866795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.925 [2024-07-12 19:26:03.867015] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.925 [2024-07-12 19:26:03.867238] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.925 [2024-07-12 19:26:03.867247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.925 [2024-07-12 19:26:03.867254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.925 [2024-07-12 19:26:03.869448] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:57.925 [2024-07-12 19:26:03.869471] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:57.925 [2024-07-12 19:26:03.869478] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:57.925 [2024-07-12 19:26:03.869483] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:57.925 [2024-07-12 19:26:03.869487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:57.925 [2024-07-12 19:26:03.869677] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:29:57.925 [2024-07-12 19:26:03.869797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:57.925 [2024-07-12 19:26:03.869799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:29:57.925 [2024-07-12 19:26:03.870799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:57.925 [2024-07-12 19:26:03.879991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:57.925 [2024-07-12 19:26:03.880679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:57.925 [2024-07-12 19:26:03.880718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420
00:29:57.925 [2024-07-12 19:26:03.880729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set
00:29:57.925 [2024-07-12 19:26:03.880971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor
00:29:57.925 [2024-07-12 19:26:03.881201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:57.925 [2024-07-12 19:26:03.881210] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:57.925 [2024-07-12 19:26:03.881218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:57.925 [2024-07-12 19:26:03.884764] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.925 [2024-07-12 19:26:03.893807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.925 [2024-07-12 19:26:03.894543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.925 [2024-07-12 19:26:03.894580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.925 [2024-07-12 19:26:03.894591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.925 [2024-07-12 19:26:03.894831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.925 [2024-07-12 19:26:03.895054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.925 [2024-07-12 19:26:03.895063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.925 [2024-07-12 19:26:03.895076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.925 [2024-07-12 19:26:03.898630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.925 [2024-07-12 19:26:03.907617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.925 [2024-07-12 19:26:03.908134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.925 [2024-07-12 19:26:03.908153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.925 [2024-07-12 19:26:03.908161] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.925 [2024-07-12 19:26:03.908381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.925 [2024-07-12 19:26:03.908601] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.925 [2024-07-12 19:26:03.908608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.925 [2024-07-12 19:26:03.908615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.925 [2024-07-12 19:26:03.912188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
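The app_setup_trace notices above already name both ways to look at the nvmf tracepoints for this target instance; spelled out as commands (the group name, instance id, and /dev/shm path are taken from those notices, while the location of the spdk_trace binary is an assumption about the build layout):

    # live snapshot of the 'nvmf' tracepoint group for instance id 0, per the notice
    ./spdk/build/bin/spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace file for offline analysis, also per the notice
    cp /dev/shm/nvmf_trace.0 /tmp/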
00:29:57.925 [2024-07-12 19:26:03.921589] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.925 [2024-07-12 19:26:03.922213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.925 [2024-07-12 19:26:03.922250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.925 [2024-07-12 19:26:03.922261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:03.922502] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:03.922726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:03.922735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:03.922742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:03.926311] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.926 [2024-07-12 19:26:03.935507] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:03.936201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:03.936237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:03.936249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:03.936492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:03.936715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:03.936723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:03.936731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:03.940282] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.926 [2024-07-12 19:26:03.949478] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:03.950230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:03.950267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:03.950279] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:03.950522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:03.950744] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:03.950753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:03.950760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:03.954317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.926 [2024-07-12 19:26:03.963317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:03.964068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:03.964104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:03.964114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:03.964359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:03.964583] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:03.964591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:03.964599] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:03.968151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.926 [2024-07-12 19:26:03.977142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:03.977828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:03.977845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:03.977853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:03.978073] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:03.978298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:03.978306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:03.978313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:03.981855] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.926 [2024-07-12 19:26:03.991049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:03.991619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:03.991655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:03.991666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:03.991909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:03.992140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:03.992149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:03.992156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:03.995701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.926 [2024-07-12 19:26:04.004895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:04.005597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:04.005634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:04.005646] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:04.005884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:04.006107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:04.006115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:04.006129] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:04.009676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.926 [2024-07-12 19:26:04.018883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:04.019611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:04.019648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:04.019660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:04.019898] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:04.020129] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:04.020139] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:04.020147] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:04.023705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.926 [2024-07-12 19:26:04.032698] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:04.033412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:04.033449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:04.033460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:04.033699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:04.033921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:04.033930] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:04.033941] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:04.037496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.926 [2024-07-12 19:26:04.046691] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.926 [2024-07-12 19:26:04.047300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.926 [2024-07-12 19:26:04.047319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:57.926 [2024-07-12 19:26:04.047327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:57.926 [2024-07-12 19:26:04.047546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:57.926 [2024-07-12 19:26:04.047766] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.926 [2024-07-12 19:26:04.047773] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.926 [2024-07-12 19:26:04.047780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.926 [2024-07-12 19:26:04.051324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.189 [2024-07-12 19:26:04.060515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.061113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.061156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.061169] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.061411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.061634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.061643] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.061651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.065203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.189 [2024-07-12 19:26:04.074403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.075175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.075212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.075225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.075466] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.075688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.075698] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.075705] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.079262] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.189 [2024-07-12 19:26:04.088252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.089014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.089055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.089066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.089313] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.089536] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.089545] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.089553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.093100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.189 [2024-07-12 19:26:04.102138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.102868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.102904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.102916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.103163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.103386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.103394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.103402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.106950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.189 [2024-07-12 19:26:04.115941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.116514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.116550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.116562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.116801] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.117023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.117032] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.117039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.120594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.189 [2024-07-12 19:26:04.129803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.130397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.130434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.130445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.130685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.130913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.130921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.130929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.134482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.189 [2024-07-12 19:26:04.143673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.144426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.144463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.144474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.144712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.144934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.144943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.144951] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.148500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.189 [2024-07-12 19:26:04.157488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.157963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.157981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.157989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.158214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.158434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.158441] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.189 [2024-07-12 19:26:04.158448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.189 [2024-07-12 19:26:04.161987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.189 [2024-07-12 19:26:04.171392] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.189 [2024-07-12 19:26:04.172079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.189 [2024-07-12 19:26:04.172116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.189 [2024-07-12 19:26:04.172133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.189 [2024-07-12 19:26:04.172373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.189 [2024-07-12 19:26:04.172595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.189 [2024-07-12 19:26:04.172603] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.172611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.176163] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.190 [2024-07-12 19:26:04.185358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.186104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.186147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.186158] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.186398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.186620] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.186629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.186636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.190183] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.190 [2024-07-12 19:26:04.199169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.199722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.199758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.199769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.200008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.200237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.200247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.200254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.203798] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.190 [2024-07-12 19:26:04.212991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.213590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.213608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.213616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.213834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.214053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.214061] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.214068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.217612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.190 [2024-07-12 19:26:04.226811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.227414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.227430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.227442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.227661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.227879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.227887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.227894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.231442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.190 [2024-07-12 19:26:04.240663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.241413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.241450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.241461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.241700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.241922] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.241931] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.241938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.245489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.190 [2024-07-12 19:26:04.254472] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.255082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.255100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.255108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.255333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.255553] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.255561] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.255567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.259105] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.190 [2024-07-12 19:26:04.268292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.268891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.268906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.268913] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.269135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.269354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.269366] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.269373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.272913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.190 [2024-07-12 19:26:04.282107] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.282714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.282729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.282736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.282955] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.283178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.283186] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.283193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.286732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.190 [2024-07-12 19:26:04.295921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.296569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.296584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.296591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.296809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.297027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.297035] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.297042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.300584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.190 [2024-07-12 19:26:04.309811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.190 [2024-07-12 19:26:04.310438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.190 [2024-07-12 19:26:04.310474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.190 [2024-07-12 19:26:04.310485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.190 [2024-07-12 19:26:04.310724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.190 [2024-07-12 19:26:04.310946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.190 [2024-07-12 19:26:04.310955] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.190 [2024-07-12 19:26:04.310962] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.190 [2024-07-12 19:26:04.314514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.454 [2024-07-12 19:26:04.323730] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.454 [2024-07-12 19:26:04.324511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.454 [2024-07-12 19:26:04.324548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.454 [2024-07-12 19:26:04.324558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.454 [2024-07-12 19:26:04.324797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.454 [2024-07-12 19:26:04.325019] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.454 [2024-07-12 19:26:04.325028] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.454 [2024-07-12 19:26:04.325036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.454 [2024-07-12 19:26:04.328589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.454 [2024-07-12 19:26:04.337579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.454 [2024-07-12 19:26:04.338232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.454 [2024-07-12 19:26:04.338268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.454 [2024-07-12 19:26:04.338280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.454 [2024-07-12 19:26:04.338521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.454 [2024-07-12 19:26:04.338743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.454 [2024-07-12 19:26:04.338752] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.454 [2024-07-12 19:26:04.338760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.454 [2024-07-12 19:26:04.342317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.454 [2024-07-12 19:26:04.351521] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.454 [2024-07-12 19:26:04.352213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.454 [2024-07-12 19:26:04.352250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.454 [2024-07-12 19:26:04.352261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.454 [2024-07-12 19:26:04.352499] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.454 [2024-07-12 19:26:04.352722] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.454 [2024-07-12 19:26:04.352730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.454 [2024-07-12 19:26:04.352737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.454 [2024-07-12 19:26:04.356296] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.454 [2024-07-12 19:26:04.365500] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.454 [2024-07-12 19:26:04.366165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.454 [2024-07-12 19:26:04.366184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.454 [2024-07-12 19:26:04.366192] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.454 [2024-07-12 19:26:04.366417] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.454 [2024-07-12 19:26:04.366638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.454 [2024-07-12 19:26:04.366647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.454 [2024-07-12 19:26:04.366654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.454 [2024-07-12 19:26:04.370205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.454 [2024-07-12 19:26:04.379423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.454 [2024-07-12 19:26:04.380046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.454 [2024-07-12 19:26:04.380062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.454 [2024-07-12 19:26:04.380070] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.454 [2024-07-12 19:26:04.380297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.454 [2024-07-12 19:26:04.380516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.380524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.380531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.384069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.455 [2024-07-12 19:26:04.393281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.393883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.393900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.393907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.394131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.394352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.394359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.394366] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.397903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.455 [2024-07-12 19:26:04.407099] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.407723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.407738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.407745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.407964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.408188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.408197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.408208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.411749] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.455 [2024-07-12 19:26:04.420943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.421553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.421569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.421576] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.421794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.422013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.422020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.422026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.425610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.455 [2024-07-12 19:26:04.434818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.435415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.435452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.435464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.435703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.435928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.435937] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.435944] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.439501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.455 [2024-07-12 19:26:04.448705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.449418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.449455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.449466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.449705] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.449928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.449938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.449946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.453504] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.455 [2024-07-12 19:26:04.462501] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.463218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.463256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.463268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.463509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.463741] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.463751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.463758] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.467319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.455 [2024-07-12 19:26:04.476315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.476773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.476791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.476799] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.477019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.477243] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.477252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.477259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.480801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.455 [2024-07-12 19:26:04.490209] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.490909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.490946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.490957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.491204] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.491428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.491436] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.491444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.494993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.455 [2024-07-12 19:26:04.504194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.504938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.504975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.504986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.505232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.505461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.505470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.505478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:58.455 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:58.455 19:26:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:58.455 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:58.455 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.455 [2024-07-12 19:26:04.509026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.455 [2024-07-12 19:26:04.518055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.455 [2024-07-12 19:26:04.518780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.455 [2024-07-12 19:26:04.518817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.455 [2024-07-12 19:26:04.518829] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.455 [2024-07-12 19:26:04.519068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.455 [2024-07-12 19:26:04.519300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.455 [2024-07-12 19:26:04.519309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.455 [2024-07-12 19:26:04.519317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.455 [2024-07-12 19:26:04.522865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.456 [2024-07-12 19:26:04.531869] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.456 [2024-07-12 19:26:04.532477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.456 [2024-07-12 19:26:04.532515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.456 [2024-07-12 19:26:04.532526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.456 [2024-07-12 19:26:04.532765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.456 [2024-07-12 19:26:04.532988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.456 [2024-07-12 19:26:04.532996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.456 [2024-07-12 19:26:04.533004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.456 [2024-07-12 19:26:04.536565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.456 [2024-07-12 19:26:04.545768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.456 [2024-07-12 19:26:04.546489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.456 [2024-07-12 19:26:04.546526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.456 [2024-07-12 19:26:04.546536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.456 [2024-07-12 19:26:04.546775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:58.456 [2024-07-12 19:26:04.547003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.456 [2024-07-12 19:26:04.547013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.456 [2024-07-12 19:26:04.547021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.456 [2024-07-12 19:26:04.550577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.456 [2024-07-12 19:26:04.552305] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.456 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.456 [2024-07-12 19:26:04.559573] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.456 [2024-07-12 19:26:04.560205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.456 [2024-07-12 19:26:04.560242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.456 [2024-07-12 19:26:04.560253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.456 [2024-07-12 19:26:04.560491] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.456 [2024-07-12 19:26:04.560715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.456 [2024-07-12 19:26:04.560723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.456 [2024-07-12 19:26:04.560731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.456 [2024-07-12 19:26:04.564288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.456 [2024-07-12 19:26:04.573487] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.456 [2024-07-12 19:26:04.574229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.456 [2024-07-12 19:26:04.574266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.456 [2024-07-12 19:26:04.574278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.456 [2024-07-12 19:26:04.574520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.456 [2024-07-12 19:26:04.574743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.456 [2024-07-12 19:26:04.574751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.456 [2024-07-12 19:26:04.574759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.456 [2024-07-12 19:26:04.578318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.716 Malloc0 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.716 [2024-07-12 19:26:04.587313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.716 [2024-07-12 19:26:04.588032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.716 [2024-07-12 19:26:04.588068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.716 [2024-07-12 19:26:04.588081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.716 [2024-07-12 19:26:04.588330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.716 [2024-07-12 19:26:04.588554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.716 [2024-07-12 19:26:04.588562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.716 [2024-07-12 19:26:04.588570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.716 [2024-07-12 19:26:04.592117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.716 [2024-07-12 19:26:04.601112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.716 [2024-07-12 19:26:04.601833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.716 [2024-07-12 19:26:04.601870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.716 [2024-07-12 19:26:04.601881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.716 [2024-07-12 19:26:04.602120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.716 [2024-07-12 19:26:04.602351] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.716 [2024-07-12 19:26:04.602359] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.716 [2024-07-12 19:26:04.602367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.716 [2024-07-12 19:26:04.605914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:58.716 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:58.717 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:58.717 [2024-07-12 19:26:04.614904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.717 [2024-07-12 19:26:04.615671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.717 [2024-07-12 19:26:04.615695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.717 [2024-07-12 19:26:04.615707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15183b0 with addr=10.0.0.2, port=4420 00:29:58.717 [2024-07-12 19:26:04.615720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15183b0 is same with the state(5) to be set 00:29:58.717 [2024-07-12 19:26:04.615963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15183b0 (9): Bad file descriptor 00:29:58.717 [2024-07-12 19:26:04.616198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.717 [2024-07-12 19:26:04.616207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.717 [2024-07-12 19:26:04.616215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.717 [2024-07-12 19:26:04.619762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.717 19:26:04 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:58.717 19:26:04 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1612597 00:29:58.717 [2024-07-12 19:26:04.628768] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.717 [2024-07-12 19:26:04.664671] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
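Interleaved with the reset retries above is the target-side bring-up from host/bdevperf.sh: create the TCP transport, a Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1, attach the namespace, and finally add the 10.0.0.2:4420 listener — at which point the log flips from "Resetting controller failed." to "Resetting controller successful.". Pulled out of the trace, the same sequence as a standalone rpc.py session looks roughly like this (a sketch: it assumes a running nvmf_tgt and the default RPC socket; in the test these calls go through the rpc_cmd wrapper):

  # target-side setup, mirroring the rpc_cmd calls traced above
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # once the listener exists, pending reconnects stop failing with ECONNREFUSED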
00:30:08.722
00:30:08.722 Latency(us)
00:30:08.722 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:08.722 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:08.722 Verification LBA range: start 0x0 length 0x4000
00:30:08.722 Nvme1n1 : 15.05 8386.05 32.76 9645.72 0.00 7059.05 1051.31 45001.39
00:30:08.722 ===================================================================================================================
00:30:08.722 Total : 8386.05 32.76 9645.72 0.00 7059.05 1051.31 45001.39
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:08.722 rmmod nvme_tcp
00:30:08.722 rmmod nvme_fabrics
00:30:08.722 rmmod nvme_keyring
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1613774 ']'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1613774
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1613774 ']'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1613774
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1613774
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1613774'
00:30:08.722 killing process with pid 1613774
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1613774
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1613774
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:08.722 19:26:13 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:09.665 19:26:15 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:09.665
00:30:09.665 real 0m27.433s
00:30:09.665 user 1m2.874s
00:30:09.665 sys 0m6.873s
00:30:09.665 19:26:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:09.665 19:26:15 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:09.665 ************************************
00:30:09.665 END TEST nvmf_bdevperf
00:30:09.665 ************************************
00:30:09.665 19:26:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:09.665 19:26:15 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:09.665 19:26:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:09.665 19:26:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:09.665 19:26:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:09.665 ************************************
00:30:09.665 START TEST nvmf_target_disconnect
00:30:09.665 ************************************
00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:09.665 * Looking for test storage...
00:30:09.665 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.665 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.925 19:26:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:09.926 19:26:15 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:16.516 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:16.516 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.516 19:26:22 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:16.516 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:16.516 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:16.516 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:16.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:16.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.502 ms 00:30:16.778 00:30:16.778 --- 10.0.0.2 ping statistics --- 00:30:16.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.778 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:16.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:16.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.353 ms 00:30:16.778 00:30:16.778 --- 10.0.0.1 ping statistics --- 00:30:16.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:16.778 rtt min/avg/max/mdev = 0.353/0.353/0.353/0.000 ms 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:16.778 ************************************ 00:30:16.778 START TEST nvmf_target_disconnect_tc1 00:30:16.778 ************************************ 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:30:16.778 
19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:16.778 EAL: No free 2048 kB hugepages reported on node 1 00:30:16.778 [2024-07-12 19:26:22.838226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:16.778 [2024-07-12 19:26:22.838303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16aee20 with addr=10.0.0.2, port=4420 00:30:16.778 [2024-07-12 19:26:22.838336] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:16.778 [2024-07-12 19:26:22.838354] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:16.778 [2024-07-12 19:26:22.838362] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:16.778 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:16.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:16.778 Initializing NVMe Controllers 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:16.778 00:30:16.778 real 0m0.108s 00:30:16.778 user 0m0.050s 00:30:16.778 sys 
0m0.058s 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:16.778 ************************************ 00:30:16.778 END TEST nvmf_target_disconnect_tc1 00:30:16.778 ************************************ 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.778 19:26:22 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:17.040 ************************************ 00:30:17.040 START TEST nvmf_target_disconnect_tc2 00:30:17.040 ************************************ 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1619806 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1619806 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1619806 ']' 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
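For tc2 the entries that follow show nvmf_tgt being launched inside the cvl_0_0_ns_spdk namespace and then configured through the rpc_cmd helper, which issues JSON-RPC calls over the /var/tmp/spdk.sock socket that waitforlisten is polling here. Roughly, and only as a sketch of the equivalent standalone steps (scripts/rpc.py from the same SPDK tree, target already listening on its default RPC socket), the configuration driven below amounts to:

  # create the backing bdev and the TCP transport
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  # create the subsystem, attach the namespace, and expose it on 10.0.0.2:4420
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420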
00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.040 19:26:22 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:17.040 [2024-07-12 19:26:22.986034] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:30:17.040 [2024-07-12 19:26:22.986091] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.040 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.040 [2024-07-12 19:26:23.071022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:17.040 [2024-07-12 19:26:23.164957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.040 [2024-07-12 19:26:23.165011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.040 [2024-07-12 19:26:23.165019] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.040 [2024-07-12 19:26:23.165026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.040 [2024-07-12 19:26:23.165032] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.040 [2024-07-12 19:26:23.165732] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:30:17.040 [2024-07-12 19:26:23.165865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:30:17.040 [2024-07-12 19:26:23.166037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:30:17.040 [2024-07-12 19:26:23.166055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:17.982 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.983 Malloc0 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.983 [2024-07-12 19:26:23.839046] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.983 [2024-07-12 19:26:23.879400] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1619874 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:17.983 19:26:23 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:17.983 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.902 19:26:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1619806 00:30:19.902 19:26:25 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Write completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 Read completed with error (sct=0, sc=8) 00:30:19.902 starting I/O failed 00:30:19.902 [2024-07-12 19:26:25.911560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:19.902 [2024-07-12 19:26:25.912004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:30:19.902 [2024-07-12 19:26:25.912025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.912858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.912887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.913411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.913449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.913842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.913856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.914340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.914377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.914764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.914777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.915384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.915421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.915768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.915782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.916320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.916358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 00:30:19.902 [2024-07-12 19:26:25.916759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.902 [2024-07-12 19:26:25.916773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.902 qpair failed and we were unable to recover it. 
00:30:19.903 [2024-07-12 19:26:25.917103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.917114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.917560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.917573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.917968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.917979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.918524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.918563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.918959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.918972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.919517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.919556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.919952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.919966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.920351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.920390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.920797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.920811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.921347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.921386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 
00:30:19.903 [2024-07-12 19:26:25.921748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.921763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.922108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.922120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.922337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.922350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.922733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.922745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.923132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.923144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.923465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.923476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.923835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.923847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.924242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.924254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.924620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.924631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.925017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.925029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 
00:30:19.903 [2024-07-12 19:26:25.925239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.925256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.925571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.925583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.925930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.925941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.926147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.926161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.926572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.926583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.926929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.926939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.927315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.927326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.927714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.927725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.928124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.928135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.928525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.928535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 
00:30:19.903 [2024-07-12 19:26:25.928893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.928904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.929242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.929253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.929617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.929627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.929944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.929956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.930344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.930355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.931304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.931329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.931638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.931650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.931962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.931973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.932682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.932702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 00:30:19.903 [2024-07-12 19:26:25.933027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.903 [2024-07-12 19:26:25.933039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.903 qpair failed and we were unable to recover it. 
00:30:19.904 [2024-07-12 19:26:25.933763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.933782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.934166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.934178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.934559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.934569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.934965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.934975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.935363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.935375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.935810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.935822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.936163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.936175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.936592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.936604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.936903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.936914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.937154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.937164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 
00:30:19.904 [2024-07-12 19:26:25.937368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.937379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.937659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.937672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.938042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.938056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.938380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.938393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.938719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.938732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.939132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.939145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.939551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.939564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.939994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.940007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.940374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.940388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.940694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.940707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 
00:30:19.904 [2024-07-12 19:26:25.940948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.940964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.941325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.941338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.941732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.941745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.942110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.942136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.942335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.942349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.942673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.942686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.943070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.943084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.943469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.943482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.943853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.943867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.944275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.944290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 
00:30:19.904 [2024-07-12 19:26:25.944521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.944535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.944863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.944877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.945246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.945259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.945600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.945613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.946001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.946015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.946495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.946508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.946885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.946897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.947268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.947281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.947646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.947659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 00:30:19.904 [2024-07-12 19:26:25.948018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.904 [2024-07-12 19:26:25.948031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.904 qpair failed and we were unable to recover it. 
00:30:19.904 [2024-07-12 19:26:25.948485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.948503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.948703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.948719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.949107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.949129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.949502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.949519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.949893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.949909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.950145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.950163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.950552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.950568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.950978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.950995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.951380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.951398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.951818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.951834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 
00:30:19.905 [2024-07-12 19:26:25.952226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.952243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.952618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.952634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.953007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.953025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.953416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.953433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.953828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.953845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.954222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.954239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.954609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.954625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.955001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.955018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.955371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.955388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.955722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.955739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 
00:30:19.905 [2024-07-12 19:26:25.956093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.956113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.956484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.956501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.956725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.956743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.957113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.957135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.957549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.957566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.957922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.957939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.958323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.958341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.958711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.958728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.959104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.959121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.959469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.959486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 
00:30:19.905 [2024-07-12 19:26:25.959848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.959866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.960233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.960251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.960634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.960652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.961038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.961059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.961478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.961500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.961894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.961915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.962313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.962335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.962699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.962720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.963085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.963106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 00:30:19.905 [2024-07-12 19:26:25.963510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.905 [2024-07-12 19:26:25.963532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.905 qpair failed and we were unable to recover it. 
00:30:19.906 [2024-07-12 19:26:25.963880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.963901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.964209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.964232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.964620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.964641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.965009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.965029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.965443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.965465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.965880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.965901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.966309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.966331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.966609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.966629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.966994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.967016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.967428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.967449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 
00:30:19.906 [2024-07-12 19:26:25.967720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.967740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.968089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.968111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.968501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.968521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.968947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.968968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.969393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.969414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.969839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.969859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.970208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.970230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.970645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.970665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.971024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.971044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.971435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.971457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 
00:30:19.906 [2024-07-12 19:26:25.971797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.971822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.972206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.972228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.972586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.972607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.973005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.973025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.973412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.973442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.973927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.973955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.974368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.974397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.974827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.974855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.975264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.975293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 00:30:19.906 [2024-07-12 19:26:25.975701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.906 [2024-07-12 19:26:25.975729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.906 qpair failed and we were unable to recover it. 
00:30:19.907 [2024-07-12 19:26:25.976149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.976179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.976585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.976613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.977037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.977065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.977367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.977396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.977805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.977833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.978141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.978171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.978615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.978643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.979132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.979161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.979603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.979631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.980053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.980080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 
00:30:19.907 [2024-07-12 19:26:25.980479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.980509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.980894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.980922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.981446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.981534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.981846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.981881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.982323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.982355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.982737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.982766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.983111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.983150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.983663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.983694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.984097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.984132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.984537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.984565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 
00:30:19.907 [2024-07-12 19:26:25.984999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.985027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.985330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.985360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.985765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.985792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.986218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.986248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.986696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.986724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.986996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.987023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.987507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.987536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.987958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.987985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.988496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.988526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.988913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.988941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 
00:30:19.907 [2024-07-12 19:26:25.989405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.989440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.989832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.989860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.990301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.990331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.990751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.990779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.991142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.991172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.991598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.991626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.992048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.992076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.992497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.992526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.992898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.992926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 00:30:19.907 [2024-07-12 19:26:25.993472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.907 [2024-07-12 19:26:25.993560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.907 qpair failed and we were unable to recover it. 
00:30:19.907 [2024-07-12 19:26:25.994034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.994070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.994351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.994384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.994794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.994823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.995234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.995264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.995587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.995616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.996004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.996032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.996249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.996287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.996622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.996650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.997085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.997113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.997545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.997573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 
00:30:19.908 [2024-07-12 19:26:25.997944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.997972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.998375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.998403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.998803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.998832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.999048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.999080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.999498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.999529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:25.999927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:25.999956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:26.000254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:26.000283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:26.000609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:26.000639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:26.001047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:26.001075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 00:30:19.908 [2024-07-12 19:26:26.001515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:19.908 [2024-07-12 19:26:26.001547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:19.908 qpair failed and we were unable to recover it. 
00:30:19.908 [2024-07-12 19:26:26.001948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:19.908 [2024-07-12 19:26:26.001976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:19.908 qpair failed and we were unable to recover it.
[... the same three-message failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 19:26:26.001948 through 19:26:26.091045; only the per-attempt timestamps and the elapsed-time prefix (00:30:19.908 through 00:30:20.185) change between entries ...]
00:30:20.185 [2024-07-12 19:26:26.091527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.091557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.092014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.092042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.092454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.092483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.092901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.092930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.093343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.093373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.093789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.093817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.094245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.094274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.094719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.094750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.095133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.095164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.095612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.095640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 
00:30:20.185 [2024-07-12 19:26:26.096051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.096080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.096531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.096559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.096978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.097007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.097427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.097457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.097874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.097903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.098325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.098354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.098774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.098804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.099203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.099233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.099654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.099683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.185 [2024-07-12 19:26:26.100093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.100121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 
00:30:20.185 [2024-07-12 19:26:26.100519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.185 [2024-07-12 19:26:26.100549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.185 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.100939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.100968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.101364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.101394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.101800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.101830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.102251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.102281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.102716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.102744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.103152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.103197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.103614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.103650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.104029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.104056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.104355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.104386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 
00:30:20.186 [2024-07-12 19:26:26.104791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.104819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.105227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.105256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.105696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.105725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.106156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.106185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.106614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.106641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.107067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.107094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.107520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.107550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.107969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.107998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.108427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.108456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.108867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.108895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 
00:30:20.186 [2024-07-12 19:26:26.109301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.109330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.109747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.109776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.110180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.110208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.110625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.110653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.111080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.111109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.111532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.111561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.111980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.112009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.112428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.112457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.112849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.112877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.113248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.113277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 
00:30:20.186 [2024-07-12 19:26:26.113703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.113730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.114140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.114169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.114604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.114634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.115049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.115077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.115407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.115440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.115856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.115885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.186 [2024-07-12 19:26:26.116308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.186 [2024-07-12 19:26:26.116338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.186 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.116765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.116792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.117208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.117237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.117655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.117682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 
00:30:20.187 [2024-07-12 19:26:26.118103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.118139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.118534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.118563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.119043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.119071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.119505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.119534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.119936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.119965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.120381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.120411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.120763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.120791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.121244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.121280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.121702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.121729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.122155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.122184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 
00:30:20.187 [2024-07-12 19:26:26.122596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.122624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.123034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.123062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.123492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.123521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.123944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.123973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.124375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.124404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.124708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.124737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.125151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.125182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.125575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.125604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.126036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.126064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.126465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.126495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 
00:30:20.187 [2024-07-12 19:26:26.126911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.126938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.127284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.127313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.127720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.127748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.128192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.128220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.128638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.128667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.128967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.128996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.129447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.129475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.129889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.129918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.130343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.130372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.130693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.130721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 
00:30:20.187 [2024-07-12 19:26:26.131141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.131171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.131474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.131505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.131936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.131965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.132385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.132414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.132818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.132847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.133256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.133286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.133695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.187 [2024-07-12 19:26:26.133725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.187 qpair failed and we were unable to recover it. 00:30:20.187 [2024-07-12 19:26:26.134120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.134158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.134583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.134613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.135019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.135048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 
00:30:20.188 [2024-07-12 19:26:26.135406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.135436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.135864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.135893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.136289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.136318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.136724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.136753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.137178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.137207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.137632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.137660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.138064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.138092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.138505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.138540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.138918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.138948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.139296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.139326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 
00:30:20.188 [2024-07-12 19:26:26.139736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.139764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.140177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.140206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.140628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.140656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.141081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.141109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.141563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.141592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.142002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.142031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.142379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.142408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.142793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.142821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.143233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.143262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.143678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.143706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 
00:30:20.188 [2024-07-12 19:26:26.144133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.144162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.144561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.144589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.145004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.145032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.145446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.145476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.145898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.145926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.146237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.146271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.146703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.146731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.147139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.147170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.147579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.147608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 00:30:20.188 [2024-07-12 19:26:26.148024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.188 [2024-07-12 19:26:26.148051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.188 qpair failed and we were unable to recover it. 
00:30:20.188 [2024-07-12 19:26:26.148468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.188 [2024-07-12 19:26:26.148497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.188 qpair failed and we were unable to recover it.
00:30:20.188 [... the identical connect() failure (errno = 111) on tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it.", repeats for every retry between 19:26:26.148 and 19:26:26.241 ...]
00:30:20.194 [2024-07-12 19:26:26.241787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.194 [2024-07-12 19:26:26.241815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.194 qpair failed and we were unable to recover it.
00:30:20.194 [2024-07-12 19:26:26.242201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.242231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.242635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.242663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.242943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.242973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.243405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.243436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.243870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.243898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.244365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.244395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.244845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.244873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.245234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.245266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.245669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.245697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.246167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.246198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 
00:30:20.194 [2024-07-12 19:26:26.246693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.246721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.247145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.247177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.247516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.247544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.247990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.248018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.248339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.248370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.248786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.248815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.249245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.249274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.249731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.194 [2024-07-12 19:26:26.249760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.194 qpair failed and we were unable to recover it. 00:30:20.194 [2024-07-12 19:26:26.250170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.250199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.250627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.250655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 
00:30:20.195 [2024-07-12 19:26:26.250986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.251015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.251496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.251524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.252004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.252034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.252457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.252488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.252895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.252926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.253337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.253368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.253779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.253808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.254244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.254275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.254600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.254631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.255064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.255093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 
00:30:20.195 [2024-07-12 19:26:26.255559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.255591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.256009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.256039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.256407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.256438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.256870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.256901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.257266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.257311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.257727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.257757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.258156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.258188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.258649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.258680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.259106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.259146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.259624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.259655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 
00:30:20.195 [2024-07-12 19:26:26.260054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.260085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.260468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.260500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.260905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.260935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.261376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.261409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.261789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.261819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.262227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.262258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.262741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.262771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.263191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.263223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.263644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.263674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.264099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.264138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 
00:30:20.195 [2024-07-12 19:26:26.264569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.264600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.264995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.265025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.265464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.265495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.265805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.265835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.266246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.266277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.266732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.266762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.267164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.267195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.267672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.267702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.195 qpair failed and we were unable to recover it. 00:30:20.195 [2024-07-12 19:26:26.268142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.195 [2024-07-12 19:26:26.268173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.268615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.268646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 
00:30:20.196 [2024-07-12 19:26:26.269078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.269108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.269656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.269686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.270103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.270143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.270472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.270503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.270906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.270934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.271358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.271452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.271952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.271989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.272356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.272389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.272701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.272729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.273160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.273192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 
00:30:20.196 [2024-07-12 19:26:26.273543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.273581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.273998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.274028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.274453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.274484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.274889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.274919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.275331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.275372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.275782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.275812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.276220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.276250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.276685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.276714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.277046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.277073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.277533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.277565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 
00:30:20.196 [2024-07-12 19:26:26.277927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.277955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.278320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.278353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.278827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.278856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.279287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.279318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.279546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.279578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.279959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.279988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.280438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.280469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.280823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.280855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.281182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.281212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.281649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.281678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 
00:30:20.196 [2024-07-12 19:26:26.282111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.282149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.282535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.282569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.283008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.283038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.283463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.283494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.283795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.283822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.284164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.284199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.284637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.284666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.285154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.285184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.285641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.196 [2024-07-12 19:26:26.285670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.196 qpair failed and we were unable to recover it. 00:30:20.196 [2024-07-12 19:26:26.285988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.286016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 
00:30:20.197 [2024-07-12 19:26:26.286522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.286552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.286992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.287022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.287445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.287476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.287791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.287822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.288196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.288230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.288649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.288678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.289027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.289056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.289519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.289550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.289854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.289886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.290300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.290330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 
00:30:20.197 [2024-07-12 19:26:26.290744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.290773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.291185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.291217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.291679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.291708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.292120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.292159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.292640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.292676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.293086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.293115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.293513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.293543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.293856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.293886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.294291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.294323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.294827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.294856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 
00:30:20.197 [2024-07-12 19:26:26.295326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.295357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.295754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.295784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.296197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.296227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.296671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.296700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.297103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.297142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.297593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.297623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.298030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.298059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.298380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.298414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.298743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.298773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.299248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.299280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 
00:30:20.197 [2024-07-12 19:26:26.299688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.299718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.300024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.300056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.300488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.300519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.300938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.300968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.197 [2024-07-12 19:26:26.301395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.197 [2024-07-12 19:26:26.301426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.197 qpair failed and we were unable to recover it. 00:30:20.468 [2024-07-12 19:26:26.301882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.468 [2024-07-12 19:26:26.301913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.468 qpair failed and we were unable to recover it. 00:30:20.468 [2024-07-12 19:26:26.302282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.468 [2024-07-12 19:26:26.302314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.468 qpair failed and we were unable to recover it. 00:30:20.468 [2024-07-12 19:26:26.302739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.468 [2024-07-12 19:26:26.302768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.303197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.303228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.303681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.303712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 
00:30:20.469 [2024-07-12 19:26:26.304121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.304162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.304602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.304632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.305056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.305086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.305449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.305479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.305901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.305930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.306353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.306384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.306805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.306834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.307250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.307279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.307727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.307757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.308060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.308092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 
00:30:20.469 [2024-07-12 19:26:26.308513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.308545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.308967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.308997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.309431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.309461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.309892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.309922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.310437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.310473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.310889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.310918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.311376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.311406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.311712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.311739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.312177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.312207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.312629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.312658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 
00:30:20.469 [2024-07-12 19:26:26.313076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.313104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.313534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.313564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.313983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.314012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.314447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.314478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.314902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.314932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.315349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.315380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.315790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.315819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.316234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.316264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.316582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.316616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.317043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.317072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 
00:30:20.469 [2024-07-12 19:26:26.317411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.469 [2024-07-12 19:26:26.317442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.469 qpair failed and we were unable to recover it. 00:30:20.469 [2024-07-12 19:26:26.317858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.317886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.318316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.318346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.318783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.318812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.319273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.319305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.319754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.319783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.320216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.320246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.320674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.320703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.321141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.321172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.321602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.321631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 
00:30:20.470 [2024-07-12 19:26:26.322067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.322096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.322547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.322577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.323003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.323033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.323449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.323479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.323909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.323938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.324363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.324394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.324813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.324841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.325265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.325297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.325723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.325753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.326185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.326216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 
00:30:20.470 [2024-07-12 19:26:26.326637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.326668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.327094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.327134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.327571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.327601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.327982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.328010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.328438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.328474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.328947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.328975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.329295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.329324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.329760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.329790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.330186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.330216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.330686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.330716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 
00:30:20.470 [2024-07-12 19:26:26.331142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.331173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.331660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.331689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.332105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.332145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.332470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.332500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.332938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.332967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.333305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.470 [2024-07-12 19:26:26.333338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.470 qpair failed and we were unable to recover it. 00:30:20.470 [2024-07-12 19:26:26.333758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.333787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.334092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.334136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.334608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.334638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.335071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.335101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 
00:30:20.471 [2024-07-12 19:26:26.335526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.335556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.335912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.335942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.336276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.336306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.336737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.336766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.337188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.337220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.337661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.337690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.338145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.338177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.338633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.338663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.339081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.339110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.339536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.339566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 
00:30:20.471 [2024-07-12 19:26:26.340001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.340030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.340446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.340476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.340897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.340926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.341346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.341376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.341803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.341832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.342261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.342292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.342720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.342749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.343171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.343203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.343645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.343675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.344101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.344139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 
00:30:20.471 [2024-07-12 19:26:26.344553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.344582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.344937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.344968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.345264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.345298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.345761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.345792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.346217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.346259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.346706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.346736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.347178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.347209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.347637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.347665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.348092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.348131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 00:30:20.471 [2024-07-12 19:26:26.348528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.471 [2024-07-12 19:26:26.348558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.471 qpair failed and we were unable to recover it. 
00:30:20.472 [2024-07-12 19:26:26.348996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.349026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.349444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.349477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.349897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.349927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.350350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.350380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.350795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.350826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.351264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.351294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.351717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.351746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.352212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.352242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.352665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.352695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.353142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.353173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 
00:30:20.472 [2024-07-12 19:26:26.353601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.353632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.354046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.354076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.354524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.354556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.354984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.355014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.355452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.355484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.355946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.355975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.356382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.356413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.356846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.356875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.357301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.357331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.357751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.357781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 
00:30:20.472 [2024-07-12 19:26:26.358184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.358214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.358589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.358620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.359039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.359070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.359409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.359446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.359877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.359907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.360329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.360359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.360775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.360804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.361230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.361261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.361681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.361711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.362146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.362176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 
00:30:20.472 [2024-07-12 19:26:26.362605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.362635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.362959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.362987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.363283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.363316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.363742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.363772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.364087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.364136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.472 [2024-07-12 19:26:26.364521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.472 [2024-07-12 19:26:26.364551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.472 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.364976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.365006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.365425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.365457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.365818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.365849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.366272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.366303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 
00:30:20.473 [2024-07-12 19:26:26.366738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.366767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.367213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.367243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.367673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.367703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.368142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.368173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.368649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.368678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.369138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.369170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.369646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.369676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.370100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.370140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.370445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.370476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.370945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.370975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 
00:30:20.473 [2024-07-12 19:26:26.371385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.371416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.371899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.371929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.372381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.372412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.372863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.372893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.373403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.373434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.373862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.373892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.374421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.374525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.375030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.375067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.375529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.375562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 00:30:20.473 [2024-07-12 19:26:26.375991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.473 [2024-07-12 19:26:26.376021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.473 qpair failed and we were unable to recover it. 
00:30:20.473 [2024-07-12 19:26:26.376493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:20.473 [2024-07-12 19:26:26.376526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:20.473 qpair failed and we were unable to recover it.
00:30:20.480 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every connection attempt logged between 2024-07-12 19:26:26.376 and 19:26:26.471, with only the timestamps changing ...]
00:30:20.480 [2024-07-12 19:26:26.471862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.471896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.472395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.472425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.472876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.472907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.473332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.473363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.473790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.473820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.474247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.474279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.474722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.474753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.475197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.475230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.475679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.475710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.476148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.476178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 
00:30:20.480 [2024-07-12 19:26:26.476623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.476653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.476957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.476989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.477425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.477456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.477889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.477919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.478359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.478389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.478837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.478867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.479300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.479332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.479755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.479785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.480189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.480222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.480676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.480705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 
00:30:20.480 [2024-07-12 19:26:26.481146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.481178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.481609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.481639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.482083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.482116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.482569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.482599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.483012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.480 [2024-07-12 19:26:26.483043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.480 qpair failed and we were unable to recover it. 00:30:20.480 [2024-07-12 19:26:26.483471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.483502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.483936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.483966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.484388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.484419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.484840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.484871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.485245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.485277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 
00:30:20.481 [2024-07-12 19:26:26.485714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.485745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.486181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.486217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.486643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.486673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.487099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.487140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.487546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.487575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.488032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.488061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.488491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.488523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.488948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.488977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.489416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.489448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.489887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.489917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 
00:30:20.481 [2024-07-12 19:26:26.490338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.490370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.490798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.490828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.491266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.491296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.491616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.491649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.492091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.492120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.492585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.492616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.492988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.493017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.493467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.493497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.493922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.493952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.494376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.494406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 
00:30:20.481 [2024-07-12 19:26:26.494848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.494877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.495322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.495351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.495786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.495815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.496243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.496274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.496731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.496761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.497200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.497233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.497652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.497683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.498117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.498156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.498639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.498669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.499115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.499155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 
00:30:20.481 [2024-07-12 19:26:26.499508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.499538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.481 qpair failed and we were unable to recover it. 00:30:20.481 [2024-07-12 19:26:26.499951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.481 [2024-07-12 19:26:26.499980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.500428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.500459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.500896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.500926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.501356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.501387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.501827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.501857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.502337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.502368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.502789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.502819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.503259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.503289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.503717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.503746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 
00:30:20.482 [2024-07-12 19:26:26.504190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.504221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.504669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.504706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.505146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.505177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.505488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.505520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.505962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.505991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.506433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.506464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.506899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.506928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.507399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.507429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.507874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.507903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.508339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.508369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 
00:30:20.482 [2024-07-12 19:26:26.508796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.508827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.509257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.509288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.509727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.509758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.510193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.510223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.510652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.510682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.511110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.511150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.511514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.511544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.511987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.512017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.512395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.512427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.512713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.512744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 
00:30:20.482 [2024-07-12 19:26:26.513171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.513202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.513641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.513670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.514098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.514138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.514541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.514571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.515015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.515045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.482 [2024-07-12 19:26:26.515511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.482 [2024-07-12 19:26:26.515542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.482 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.515968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.515999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.516425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.516456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.516909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.516940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.517378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.517409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 
00:30:20.483 [2024-07-12 19:26:26.517834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.517864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.518295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.518325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.518763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.518793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.519239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.519294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.519719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.519751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.520179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.520211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.520663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.520693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.521153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.521185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.521608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.521637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.522066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.522096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 
00:30:20.483 [2024-07-12 19:26:26.522539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.522569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.522878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.522920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.523361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.523393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.523817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.523847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.524259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.524289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.524620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.524653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.525082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.525112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.525536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.525567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.526010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.526040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.526492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.526522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 
00:30:20.483 [2024-07-12 19:26:26.526949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.526979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.527428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.527459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.527906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.527937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.528371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.528403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.528828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.528859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.529184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.529218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.529654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.529684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.483 [2024-07-12 19:26:26.530120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.483 [2024-07-12 19:26:26.530161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.483 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.530615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.530645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.530950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.530982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 
00:30:20.484 [2024-07-12 19:26:26.531399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.531430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.531868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.531898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.532330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.532360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.532746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.532775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.533171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.533204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.533599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.533629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.534059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.534089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.534503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.534534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.534972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.535002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.535431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.535462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 
00:30:20.484 [2024-07-12 19:26:26.535880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.535910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.536337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.536367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.536811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.536840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.537287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.537319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.537742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.537772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.538096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.538136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.538519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.538550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.538978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.539008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.539449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.539480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.539909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.539938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 
00:30:20.484 [2024-07-12 19:26:26.540393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.540424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.540860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.540896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.541318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.541349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.541778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.541807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.542247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.542278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.484 [2024-07-12 19:26:26.542707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.484 [2024-07-12 19:26:26.542738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.484 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.543061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.543092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.543552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.543585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.544022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.544051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.544515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.544545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 
00:30:20.485 [2024-07-12 19:26:26.545017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.545047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.545479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.545509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.545947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.545976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.546418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.546448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.546824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.546855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.547331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.547362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.547806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.547835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.548279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.548310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.548740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.548769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.549205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.549234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 
00:30:20.485 [2024-07-12 19:26:26.549683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.549713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.550032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.550065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.550424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.550455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.550877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.550907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.551230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.551261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.551661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.551690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.552120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.552163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.552585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.552615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.552919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.552955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.553373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.553403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 
00:30:20.485 [2024-07-12 19:26:26.553830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.553859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.554300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.554332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.554765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.554796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.555258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.555289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.555591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.555623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.556055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.556086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.556533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.556564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.557013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.557041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.557511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.557543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.485 qpair failed and we were unable to recover it. 00:30:20.485 [2024-07-12 19:26:26.557966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.485 [2024-07-12 19:26:26.557996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-07-12 19:26:26.558366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.558397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.558830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.558866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.559301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.559334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.559760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.559790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.560213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.560245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.560702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.560733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.561166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.561197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.561633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.561662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.562107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.562167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.562630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.562660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-07-12 19:26:26.563094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.563132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.563554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.563583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.564030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.564060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.564509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.564539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.564976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.565005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.565453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.565484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.565923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.565953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.566496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.566601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.567061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.567099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.567568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.567600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-07-12 19:26:26.568040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.568070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.568516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.568547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.568977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.569006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.569447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.569479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.569922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.569952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.570391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.570423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.570745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.570776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.571281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.571312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.571754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.571784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.572229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.572260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 
00:30:20.486 [2024-07-12 19:26:26.572701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.572732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.486 [2024-07-12 19:26:26.573044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.486 [2024-07-12 19:26:26.573073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.486 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.573488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.573519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.573953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.573983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.574400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.574432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.574882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.574911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.575361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.575392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.575719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.575749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.576185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.576215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.576669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.576698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-07-12 19:26:26.577104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.577154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.577585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.577620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.578041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.578070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.578485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.578516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.578930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.578959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.579398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.579429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.579815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.579845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.580283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.580314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.580773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.580803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.581248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.581279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-07-12 19:26:26.581711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.581743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.582188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.582221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.582520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.582550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.582976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.583006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.583406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.583436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.583754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.583784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.584206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.584238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.584701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.584730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.585165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.585197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.585653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.585683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 
00:30:20.487 [2024-07-12 19:26:26.586141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.586172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.586598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.586627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.587075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.587104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.587546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.587576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.588016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.588047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.487 [2024-07-12 19:26:26.588519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.487 [2024-07-12 19:26:26.588551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.487 qpair failed and we were unable to recover it. 00:30:20.488 [2024-07-12 19:26:26.589045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.488 [2024-07-12 19:26:26.589080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.488 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.589516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.589551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.589909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.589943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.590394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.590425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 
00:30:20.759 [2024-07-12 19:26:26.590842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.590872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.591301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.591331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.591799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.591830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.592265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.592296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.592752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.592782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.593178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.593208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.593646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.593675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.594118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.594160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.594655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.594686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.595143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.595174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 
00:30:20.759 [2024-07-12 19:26:26.595519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.595557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.595994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.596031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.596456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.596489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.596932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.596962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.597406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.597436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.597881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.597910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.598424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.598454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.598863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.598892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.599325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.599356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.759 [2024-07-12 19:26:26.599634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.599664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 
00:30:20.759 [2024-07-12 19:26:26.600090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.759 [2024-07-12 19:26:26.600120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.759 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.600506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.600536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.600985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.601015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.601432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.601463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.601773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.601805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.602189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.602221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.602520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.602549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.602818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.602847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.603273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.603303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.603752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.603782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 
00:30:20.760 [2024-07-12 19:26:26.604228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.604258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.604704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.604733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.605173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.605203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.605613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.605643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.606085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.606116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.606536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.606566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.606998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.607027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.607442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.607474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.607913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.607944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.608386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.608418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 
00:30:20.760 [2024-07-12 19:26:26.608845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.608874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.609287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.609319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.609722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.609751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.610200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.610231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.610659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.610689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.611135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.611166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.611600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.611630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.612067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.612097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.612523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.612554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.612986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.613017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 
00:30:20.760 [2024-07-12 19:26:26.613460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.613490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.613928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.613964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.614393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.614424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.614832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.614860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.760 [2024-07-12 19:26:26.615311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.760 [2024-07-12 19:26:26.615342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.760 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.615755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.615784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.616208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.616238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.616693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.616723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.617095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.617146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.617485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.617518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 
00:30:20.761 [2024-07-12 19:26:26.617969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.618001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.618427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.618458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.618922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.618952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.619270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.619303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.619746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.619776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.620214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.620246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.620708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.620738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.621158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.621188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.621607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.621637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.621941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.621972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 
00:30:20.761 [2024-07-12 19:26:26.622423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.622454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.622895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.622924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.623362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.623394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.623831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.623861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.624182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.624215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.624700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.624729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.625163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.625195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.625654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.625683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.626074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.626106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.626528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.626559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 
00:30:20.761 [2024-07-12 19:26:26.626992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.627022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.627447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.627479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.627915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.627945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.628359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.628390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.628826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.628857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.629294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.629326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.629761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.629791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.630233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.630264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.630689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.630720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 00:30:20.761 [2024-07-12 19:26:26.631165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.761 [2024-07-12 19:26:26.631198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.761 qpair failed and we were unable to recover it. 
00:30:20.761 [2024-07-12 19:26:26.631508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.631539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.632002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.632039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.632487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.632517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.632977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.633006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.633456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.633487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.633926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.633955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.634391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.634421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.634730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.634761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.635194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.635226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.635671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.635701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 
00:30:20.762 [2024-07-12 19:26:26.636143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.636173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.636625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.636655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.637061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.637091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.637541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.637573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.637998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.638027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.638472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.638504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.638960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.638989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.639418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.639448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.639758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.639794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.640197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.640228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 
00:30:20.762 [2024-07-12 19:26:26.640643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.640672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.641120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.641165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.641658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.641688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.641994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.642025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.642431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.642463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.642830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.642860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.643286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.643317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.643758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.643787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.644242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.644272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.644719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.644748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 
00:30:20.762 [2024-07-12 19:26:26.645180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.645211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.645517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.762 [2024-07-12 19:26:26.645548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.762 qpair failed and we were unable to recover it. 00:30:20.762 [2024-07-12 19:26:26.646009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.646039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.646371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.646403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.646771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.646800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.647243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.647273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.647723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.647753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.648197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.648228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.648659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.648688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.649152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.649183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 
00:30:20.763 [2024-07-12 19:26:26.649614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.649646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.650082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.650120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.650587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.650619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.651070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.651101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.651545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.651576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.651903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.651935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.652368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.652398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.652880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.652910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.653411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.653517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.653994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.654031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 
00:30:20.763 [2024-07-12 19:26:26.654338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.654377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.654828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.654859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.655294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.655327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.655778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.655808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.656273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.656305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.656767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.656798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.657235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.657267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.657709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.657739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.658175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.658206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.658650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.658681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 
00:30:20.763 [2024-07-12 19:26:26.659111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.659152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.659541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.659571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.659980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.660010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.660435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.763 [2024-07-12 19:26:26.660466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.763 qpair failed and we were unable to recover it. 00:30:20.763 [2024-07-12 19:26:26.660913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.660943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.661399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.661431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.661855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.661886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.662323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.662356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.662793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.662824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.663269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.663299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 
00:30:20.764 [2024-07-12 19:26:26.663754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.663784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.664213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.664244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.664665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.664694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.665142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.665173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.665499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.665527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.665977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.666007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.666414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.666446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.666885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.666917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.667348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.667380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.667750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.667779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 
00:30:20.764 [2024-07-12 19:26:26.668217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.668247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.668570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.668606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.669041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.669070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.669547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.669579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.670011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.670041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.670527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.670557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.670989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.671019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.764 qpair failed and we were unable to recover it. 00:30:20.764 [2024-07-12 19:26:26.671462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.764 [2024-07-12 19:26:26.671494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.671941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.671972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.672341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.672373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 
00:30:20.765 [2024-07-12 19:26:26.672795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.672824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.673276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.673307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.673756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.673786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.674225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.674255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.674705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.674736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.675186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.675217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.675653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.675682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.676137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.676169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.676634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.676663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.677104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.677158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 
00:30:20.765 [2024-07-12 19:26:26.677631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.677660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.678092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.678121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.678573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.678602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.679045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.679074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.679534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.679566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.680002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.680032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.680472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.680503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.680940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.680970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.681439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.681471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.681896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.681926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 
00:30:20.765 [2024-07-12 19:26:26.682376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.682408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.682861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.682890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.683433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.683537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.684084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.684121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.684593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.684624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.685065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.685094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.685535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.685567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.686000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.686030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.686364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.686395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.686818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.686850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 
00:30:20.765 [2024-07-12 19:26:26.687296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.687326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.687765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.687807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.688227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.688259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.688592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.688628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.689077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.765 [2024-07-12 19:26:26.689107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.765 qpair failed and we were unable to recover it. 00:30:20.765 [2024-07-12 19:26:26.689638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.689669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.690080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.690110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.690560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.690590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.691031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.691061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.691526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.691557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 
00:30:20.766 [2024-07-12 19:26:26.691875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.691909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.692327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.692358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.692787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.692817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.693258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.693289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.693717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.693746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.694179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.694211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.694499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.694528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.694894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.694924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.695335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.695366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.695806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.695836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 
00:30:20.766 [2024-07-12 19:26:26.696327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.696357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.696745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.696775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.697229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.697260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.697718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.697748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.698076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.698110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.698575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.698607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.699044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.699075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.699398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.699431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.699871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.699907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.700220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.700250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 
00:30:20.766 [2024-07-12 19:26:26.700651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.700683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.701145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.701178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.701619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.701651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.702135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.702168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.702624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.702653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.703096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.703139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.703513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.703542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.703848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.703885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.766 [2024-07-12 19:26:26.704238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.766 [2024-07-12 19:26:26.704270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.766 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.704722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.704753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 
00:30:20.767 [2024-07-12 19:26:26.705185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.705217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.705659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.705691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.706139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.706172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.706591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.706620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.707093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.707133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.707572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.707602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.708029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.708059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.708483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.708514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.708956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.708986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.709433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.709464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 
00:30:20.767 [2024-07-12 19:26:26.709892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.709921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.710367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.710399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.710834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.710864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.711190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.711222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.711649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.711680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.712055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.712085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.712527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.712558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.712994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.713024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.713463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.713495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.713897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.713925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 
00:30:20.767 [2024-07-12 19:26:26.714374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.714404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.714843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.714873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.715251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.715283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.715722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.715752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.716228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.716258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.716699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.716729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.717159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.717190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.717509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.717541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.717945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.717982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.718428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.718458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 
00:30:20.767 [2024-07-12 19:26:26.718894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.718924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.719372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.719402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.719842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.719872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.767 [2024-07-12 19:26:26.720317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.767 [2024-07-12 19:26:26.720347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.767 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.720785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.720814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.721168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.721204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.721681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.721712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.722021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.722052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.722489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.722520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.722962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.722992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 
00:30:20.768 [2024-07-12 19:26:26.723428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.723459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.723912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.723942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.724376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.724406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.724845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.724875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.725319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.725349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.725791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.725821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.726249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.726280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.726733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.726763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.727215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.727267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.727716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.727747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 
00:30:20.768 [2024-07-12 19:26:26.728062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.728095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.728547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.728579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.729019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.729048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.729495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.729525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.729959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.729988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.730428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.730460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.730897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.730926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.731296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.731326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.732751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.732806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.733257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.733291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 
00:30:20.768 [2024-07-12 19:26:26.733769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.733801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.734237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.734268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.734702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.734733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.735196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.735227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.735667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.735697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.736899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.736946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.737409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.737442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.737874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.768 [2024-07-12 19:26:26.737904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.768 qpair failed and we were unable to recover it. 00:30:20.768 [2024-07-12 19:26:26.738370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.738419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.738866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.738897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 
00:30:20.769 [2024-07-12 19:26:26.739373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.739405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.739857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.739887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.740326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.740359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.740798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.740828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.741263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.741294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.741739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.741768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.742178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.742210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.742644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.742675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.743105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.743148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.743624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.743654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 
00:30:20.769 [2024-07-12 19:26:26.744098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.744145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.744621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.744652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.745083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.745113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.745512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.745542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.745849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.745883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.746346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.746378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.746820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.746849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.747311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.747342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.747782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.747813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.748255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.748286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 
00:30:20.769 [2024-07-12 19:26:26.748712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.748742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.749183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.749213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.749655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.749685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.769 qpair failed and we were unable to recover it. 00:30:20.769 [2024-07-12 19:26:26.750141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.769 [2024-07-12 19:26:26.750174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.750617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.750646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.751084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.751114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.751472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.751503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.751939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.751968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.752424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.752456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.752853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.752882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 
00:30:20.770 [2024-07-12 19:26:26.753338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.753368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.753806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.753835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.754268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.754300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.754740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.754779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.755215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.755246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.755680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.755710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.756146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.756177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.756606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.756640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.757076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.757112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.757563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.757594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 
00:30:20.770 [2024-07-12 19:26:26.758028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.758057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.758505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.758536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.758849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.758885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.759323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.759354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.759672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.759706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.760155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.760186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.760616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.760645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.761085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.761115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.761541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.761572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.762012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.762042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 
00:30:20.770 [2024-07-12 19:26:26.762467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.762498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.762996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.763027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.763505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.763536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.763924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.763954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.764422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.764453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.764895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.764925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.765446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.770 [2024-07-12 19:26:26.765551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.770 qpair failed and we were unable to recover it. 00:30:20.770 [2024-07-12 19:26:26.766087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.766144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.766579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.766611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.767053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.767084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 
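Note on the repeated connect() failures above: errno = 111 is ECONNREFUSED on Linux, i.e. the TCP connection attempt to 10.0.0.2 port 4420 is actively rejected because nothing is accepting on that address/port at that moment, and nvme_tcp_qpair_connect_sock then gives up on the qpair. A minimal sketch of the same failure mode at the socket level, in Python for illustration only (this is not SPDK code; the address and port are simply the values printed in the log):

    import socket

    def probe(addr="10.0.0.2", port=4420):
        # Returns 0 on success, otherwise the errno from connect().
        # Against a reachable host with no listener on the port this is
        # 111 (ECONNREFUSED), the value posix_sock_create logs above.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.connect((addr, port))
                return 0
            except OSError as e:
                return e.errno

    if __name__ == "__main__":
        print(probe())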
00:30:20.771 [2024-07-12 19:26:26.767532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.767564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.767993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.768022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Read completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 Write completed with error (sct=0, sc=8) 00:30:20.771 starting I/O failed 00:30:20.771 [2024-07-12 19:26:26.768395] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:20.771 [2024-07-12 19:26:26.768884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.768909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.769447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.769510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.769962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.769980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.770470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.770533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.770962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.770977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.771445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.771508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.771953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.771969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.772471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.772535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.772965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.772981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.773486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.773549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 
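Once the socket is gone, the outstanding I/O on the qpair is completed in error: the burst of "Read/Write completed with error (sct=0, sc=8) ... starting I/O failed" entries above is NVMe Status Code Type 0 (Generic Command Status) with Status Code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion", and the "CQ transport error -6 (No such device or address)" printed by spdk_nvme_qpair_process_completions is -ENXIO for qpair id 3. A small decoding sketch, for illustration only (not SPDK's own status decoder; only the codes that appear in this log are listed):

    # Decode the (sct, sc) pair printed in the completion errors above.
    GENERIC_COMMAND_STATUS = {        # sct == 0
        0x00: "Successful Completion",
        0x08: "Command Aborted due to SQ Deletion",
    }

    def decode_status(sct: int, sc: int) -> str:
        if sct == 0:
            return GENERIC_COMMAND_STATUS.get(sc, f"Generic Command Status 0x{sc:02x}")
        return f"sct=0x{sct:x} sc=0x{sc:02x}"

    print(decode_status(0, 8))   # -> Command Aborted due to SQ Deletion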
00:30:20.771 [2024-07-12 19:26:26.773962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.773978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.774498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.774563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.774997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.775014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.775497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.775511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.775911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.775924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.776474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.776537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.776962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.776978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.777487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.771 [2024-07-12 19:26:26.777550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.771 qpair failed and we were unable to recover it. 00:30:20.771 [2024-07-12 19:26:26.777995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.778011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.778372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.778386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 
00:30:20.772 [2024-07-12 19:26:26.778798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.778812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.779214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.779227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.779674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.779687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.780099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.780118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.780505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.780518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.780920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.780933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.781438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.781500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.781947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.781963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.782399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.782460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.782904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.782920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 
00:30:20.772 [2024-07-12 19:26:26.783452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.783515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.783966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.783981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.784492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.784557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.785040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.785056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.785477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.785491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.785830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.785844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.786387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.786451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.786899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.786915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.787425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.787488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.787945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.787961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 
00:30:20.772 [2024-07-12 19:26:26.788500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.788565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.789011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.789026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.789445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.789459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.789866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.789879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.790409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.790472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.790923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.790939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.791456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.791519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.791913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.791929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.792423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.792487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.792900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.792916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 
00:30:20.772 [2024-07-12 19:26:26.793426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.793495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.793987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.794003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.794385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.794400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.772 [2024-07-12 19:26:26.794800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.772 [2024-07-12 19:26:26.794813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.772 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.795211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.795224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.795622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.795636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.796052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.796065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.796479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.796493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.796890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.796904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.797305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.797319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 
00:30:20.773 [2024-07-12 19:26:26.797734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.797747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.798143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.798157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.798473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.798485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.798886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.798898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.799322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.799335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.799759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.799771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.800227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.800241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.800639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.800652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.801075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.801087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.801465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.801478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 
00:30:20.773 [2024-07-12 19:26:26.801871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.801884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.802279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.802296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.802668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.802683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.803150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.803162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.803544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.803556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.803953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.803965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.804378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.804392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.804790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.804806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.805200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.805213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.805609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.805622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 
00:30:20.773 [2024-07-12 19:26:26.806041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.806055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.806457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.806471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.806898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.806912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.807309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.807322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.807745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.807758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.808151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.808165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.808502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.808516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.808909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.773 [2024-07-12 19:26:26.808922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.773 qpair failed and we were unable to recover it. 00:30:20.773 [2024-07-12 19:26:26.809338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.809351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.809747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.809759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 
00:30:20.774 [2024-07-12 19:26:26.810153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.810167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.810563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.810576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.810996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.811008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.811470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.811484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.811828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.811839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.812241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.812255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.812643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.812657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.813052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.813064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.813460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.813472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.813877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.813891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 
00:30:20.774 [2024-07-12 19:26:26.814305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.814318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.814726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.814738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.815118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.815136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.815512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.815525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.815940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.815955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.816460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.816519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.816964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.816979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.817470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.817530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.817951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.817966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.818459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.818519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 
00:30:20.774 [2024-07-12 19:26:26.818958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.818973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.819469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.819527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.819954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.819969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.820477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.820535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.820975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.820990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.821497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.821557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.821982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.821996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.822385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.774 [2024-07-12 19:26:26.822440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.774 qpair failed and we were unable to recover it. 00:30:20.774 [2024-07-12 19:26:26.822878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.822895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.823409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.823468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 
00:30:20.775 [2024-07-12 19:26:26.823896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.823911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.824420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.824479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.824866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.824881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.825376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.825436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.825807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.825822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.826258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.826270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.826658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.826671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.827066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.827078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.827460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.827474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.827870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.827884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 
00:30:20.775 [2024-07-12 19:26:26.828281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.828296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.828693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.828707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.829049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.829064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.829464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.829477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.829870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.829882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.830280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.830292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.830720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.830732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.831144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.831157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.831545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.831564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.831973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.831987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 
00:30:20.775 [2024-07-12 19:26:26.832395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.832408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.832799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.832811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.833228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.833242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.833642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.833656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.834069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.834081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.834436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.834451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.834840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.834853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.835249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.835261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.835632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.835644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.775 qpair failed and we were unable to recover it. 00:30:20.775 [2024-07-12 19:26:26.836034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.775 [2024-07-12 19:26:26.836046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 
00:30:20.776 [2024-07-12 19:26:26.836356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.836368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.836754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.836766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.837175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.837187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.837592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.837604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.837999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.838011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.838425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.838437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.838851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.838863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.839264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.839277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.839667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.839678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.840072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.840084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 
00:30:20.776 [2024-07-12 19:26:26.840496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.840510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.840900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.840914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.841305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.841319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.841712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.841725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.842129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.842142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.842503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.842515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.842906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.842918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.843427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.843482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.843896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.843911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.844296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.844353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 
00:30:20.776 [2024-07-12 19:26:26.844758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.844772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.845112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.845136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.845501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.845521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.845797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.845809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.846202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.846215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.846608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.846620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.847019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.847031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.847434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.847446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.847838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.847850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.848241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.848253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 
00:30:20.776 [2024-07-12 19:26:26.848613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.848625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.776 [2024-07-12 19:26:26.848959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.776 [2024-07-12 19:26:26.848972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.776 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.849371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.849383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.849776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.849789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.850080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.850094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.850391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.850405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.850700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.850713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.851105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.851118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.851407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.851418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.851811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.851824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 
00:30:20.777 [2024-07-12 19:26:26.852215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.852229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.852486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.852499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.852909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.852921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.853304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.853316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.853706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.853718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.854131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.854143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.854530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.854542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.854933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.854945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.855438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.855495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.855879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.855901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 
00:30:20.777 [2024-07-12 19:26:26.856397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.856453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.856884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.856901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.857294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.857307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.857572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.857584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.857997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.858009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.858584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.858601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.858998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.859011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.859389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.859404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.859792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.859807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.860089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.860104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 
00:30:20.777 [2024-07-12 19:26:26.860504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.860518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.860927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.860942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.861237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.861250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.777 [2024-07-12 19:26:26.861668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.777 [2024-07-12 19:26:26.861681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.777 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.862125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.862138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.862522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.862534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.862782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.862795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.863183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.863196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.863474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.863486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.863872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.863884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 
00:30:20.778 [2024-07-12 19:26:26.864292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.864304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.864692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.864704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.865095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.865108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.865500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.865514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.865934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.865946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.866339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.866352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.866642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.866654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.867038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.867050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.867442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.867454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.867840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.867852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 
00:30:20.778 [2024-07-12 19:26:26.868242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.868254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.868651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.868663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.869072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.869086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.869529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.869541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.869927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.869940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.870434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.870487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.870903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.870918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.871425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.871477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.871875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.871890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.872266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.872280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 
00:30:20.778 [2024-07-12 19:26:26.872626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.872644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.873054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.873065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.873451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.873464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.873847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.873859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.874248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.874260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.874672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.874684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.875062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.778 [2024-07-12 19:26:26.875074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.778 qpair failed and we were unable to recover it. 00:30:20.778 [2024-07-12 19:26:26.875461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.875475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.875878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.875889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.876289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.876302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 
00:30:20.779 [2024-07-12 19:26:26.876702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.876715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.877106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.877119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.877519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.877532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.877813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.877824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.878207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.878219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.878609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.878621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:20.779 [2024-07-12 19:26:26.879009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.779 [2024-07-12 19:26:26.879021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:20.779 qpair failed and we were unable to recover it. 00:30:21.059 [2024-07-12 19:26:26.879420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.059 [2024-07-12 19:26:26.879432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.059 qpair failed and we were unable to recover it. 00:30:21.059 [2024-07-12 19:26:26.879827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.059 [2024-07-12 19:26:26.879840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.059 qpair failed and we were unable to recover it. 00:30:21.059 [2024-07-12 19:26:26.880215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.059 [2024-07-12 19:26:26.880227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.059 qpair failed and we were unable to recover it. 
00:30:21.059 [2024-07-12 19:26:26.880694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.059 [2024-07-12 19:26:26.880707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.059 qpair failed and we were unable to recover it. 00:30:21.059 [2024-07-12 19:26:26.881098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.881112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.881575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.881589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.881987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.882001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.882383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.882396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.882775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.882787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.883175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.883187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.883458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.883473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.883886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.883898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.884276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.884289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 
00:30:21.060 [2024-07-12 19:26:26.884574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.884585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.884983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.884996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.885406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.885419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.885808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.885820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.886207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.886219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.886638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.886649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.887023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.887035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.887410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.887423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.887809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.887821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.888215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.888228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 
00:30:21.060 [2024-07-12 19:26:26.888641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.888653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.889034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.889046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.889435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.889448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.889822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.889834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.889995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.890009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.890364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.890377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.890761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.890772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.891170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.891182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.891564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.891578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.891968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.891980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 
00:30:21.060 [2024-07-12 19:26:26.892366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.892378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.892796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.892808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.893214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.893226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.893456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.893471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.893857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.893876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.894264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.894276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.894687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.894699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.895084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.895096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.895477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.895490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.895883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.895895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 
00:30:21.060 [2024-07-12 19:26:26.896302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.896314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.060 [2024-07-12 19:26:26.896699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.060 [2024-07-12 19:26:26.896712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.060 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.897095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.897108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.897494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.897507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.897915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.897928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.898419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.898470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.898868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.898882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.899260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.899273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.899677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.899690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.900078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.900090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 
00:30:21.061 [2024-07-12 19:26:26.900480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.900493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.900880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.900892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.901412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.901464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.901864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.901880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.902203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.902216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.902623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.902635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.902929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.902943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.903403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.903415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.903801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.903813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.904200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.904212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 
00:30:21.061 [2024-07-12 19:26:26.904597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.904609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.904928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.904942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.905346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.905359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.905743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.905755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.906119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.906137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.906491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.906502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.906893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.906905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.907417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.907467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.907875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.907890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.908280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.908293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 
00:30:21.061 [2024-07-12 19:26:26.908697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.908709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.909067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.909080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.909492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.909504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.909785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.909796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.910268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.910280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.910657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.910670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.911273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.911285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.911653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.911664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.912079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.912091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.912485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.912497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 
00:30:21.061 [2024-07-12 19:26:26.912881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.912894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.913120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.913142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.913468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.061 [2024-07-12 19:26:26.913482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.061 qpair failed and we were unable to recover it. 00:30:21.061 [2024-07-12 19:26:26.913775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.913788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.914172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.914183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.914590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.914601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.914988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.915001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.915417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.915431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.915721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.915735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.916139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.916153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 
00:30:21.062 [2024-07-12 19:26:26.916532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.916547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.916938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.916952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.917338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.917351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.917773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.917786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.918137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.918151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.918598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.918611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.919021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.919034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.919415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.919429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.919814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.919828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.920231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.920245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 
00:30:21.062 [2024-07-12 19:26:26.920614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.920627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.921013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.921026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.921307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.921323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.921701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.921715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.922046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.922059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.922434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.922449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.922832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.922845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.923240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.923254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.923662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.923676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.924060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.924073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 
00:30:21.062 [2024-07-12 19:26:26.924449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.924462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.924852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.924865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.925269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.925284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.925669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.925682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.926067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.926081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.926456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.926470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.926879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.926894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.927221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.927235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.927559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.927573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.927898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.927911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 
00:30:21.062 [2024-07-12 19:26:26.928297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.928311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.928693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.928704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.929089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.929101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.929490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.062 [2024-07-12 19:26:26.929503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.062 qpair failed and we were unable to recover it. 00:30:21.062 [2024-07-12 19:26:26.929911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.929925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.930313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.930326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.930769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.930781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.931157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.931170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.931467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.931479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.931865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.931878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-12 19:26:26.932263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.932277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.932666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.932678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.933034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.933045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.933436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.933450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.933836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.933848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.934032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.934043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.934314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.934327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.934740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.934752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.935139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.935151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.935490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.935501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-12 19:26:26.935912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.935923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.936312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.936325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.936545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.936559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.936943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.936955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.937386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.937400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.937785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.937798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.938185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.938198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.938573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.938586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.938988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.939000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.939402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.939414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-12 19:26:26.939688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.939699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.940085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.940097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.940350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.940362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.940672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.940684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.941067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.941079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.941470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.941482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.941885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.941900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.942335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.942347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.942743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.942755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.943142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.943155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 
00:30:21.063 [2024-07-12 19:26:26.943543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.943554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.943939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.943951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.944418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.944430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.944817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.944829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.945114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.063 [2024-07-12 19:26:26.945132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.063 qpair failed and we were unable to recover it. 00:30:21.063 [2024-07-12 19:26:26.945500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.945512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.945900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.945913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.946412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.946460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.946852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.946866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.947343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.947391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-12 19:26:26.947814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.947829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.948220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.948234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.948609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.948620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.949004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.949017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.949411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.949423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.949744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.949755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.950174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.950187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.950588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.950599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.951014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.951026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.951393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.951406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-12 19:26:26.951808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.951819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.952202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.952215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.952643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.952655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.953049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.953061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.953450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.953462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.953855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.953867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.954250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.954263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.954644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.954655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.955057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.955068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.955395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.955407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 
00:30:21.064 [2024-07-12 19:26:26.955762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.955773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.956145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.956159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.956532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.956544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.956803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.956814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.957195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.957208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.957486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.957497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.957747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.957760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.958213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.064 [2024-07-12 19:26:26.958227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.064 qpair failed and we were unable to recover it. 00:30:21.064 [2024-07-12 19:26:26.958600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.958611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.959028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.959040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-12 19:26:26.959387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.959400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.959666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.959678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.960061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.960074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.960456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.960468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.960867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.960880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.961263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.961276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.961661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.961672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.962057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.962068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.962444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.962457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.962836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.962847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-12 19:26:26.963227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.963239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.963647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.963659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.964069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.964080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.964467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.964479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.964859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.964872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.965258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.965270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.965682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.965694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.965967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.965978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.966367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.966380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.966782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.966793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-12 19:26:26.967193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.967206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.967596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.967607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.967927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.967938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.968338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.968350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.968752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.968766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.969149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.969160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.969532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.969544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.969927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.969938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.970332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.970343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.970724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.970737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 
00:30:21.065 [2024-07-12 19:26:26.971110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.971128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.971498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.971512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.971839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.971852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.972239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.972251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.972619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.972631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.973011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.973023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.973406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.973419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.973791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.973803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.974191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.974203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.065 qpair failed and we were unable to recover it. 00:30:21.065 [2024-07-12 19:26:26.974622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.065 [2024-07-12 19:26:26.974633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 
00:30:21.066 [2024-07-12 19:26:26.975039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.975050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.975443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.975455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.975837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.975849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.976225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.976239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.976567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.976579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.976956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.976968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.977353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.977365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.977771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.977783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.978181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.978192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.978582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.978593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 
00:30:21.066 [2024-07-12 19:26:26.978973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.978984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.979447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.979461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.979711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.979723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.980105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.980117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.980513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.980525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.980898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.980910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.981418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.981463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.981927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.981941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.982456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.982500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.982891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.982905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 
00:30:21.066 [2024-07-12 19:26:26.983429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.983472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.983865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.983879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.984376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.984420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.984745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.984758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.985147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.985160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.985376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.985391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.985776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.985787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.986168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.986180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.986556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.986567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.986948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.986959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 
00:30:21.066 [2024-07-12 19:26:26.987343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.987354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.987731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.987742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.988109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.988120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.988392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.988403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.988784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.988796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.989192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.989205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.989605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.989617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.989990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.990001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.990401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.990413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.066 [2024-07-12 19:26:26.990629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.990641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 
00:30:21.066 [2024-07-12 19:26:26.991030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.066 [2024-07-12 19:26:26.991041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.066 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.991433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.991444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.991691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.991702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.992084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.992095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.992401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.992413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.992800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.992811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.993179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.993191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.993574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.993585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.993998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.994009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.994408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.994419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 
00:30:21.067 [2024-07-12 19:26:26.994799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.994810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.995192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.995203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.995577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.995589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.995968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.995981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.996366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.996378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.996755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.996766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.997167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.997178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.997581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.997592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.998047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.998059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.998433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.998444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 
00:30:21.067 [2024-07-12 19:26:26.998847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.998858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.999240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.999252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.999528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.999540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:26.999923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:26.999935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.000313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.000325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.000721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.000732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.001117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.001135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.001487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.001499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.001899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.001910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.002289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.002301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 
00:30:21.067 [2024-07-12 19:26:27.002667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.002678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.003061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.003072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.003359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.003371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.003753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.003765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.004041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.004053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.004372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.004383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.004789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.004800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.005180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.005192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.005575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.005586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.006004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.006017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 
00:30:21.067 [2024-07-12 19:26:27.006420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.006432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.067 [2024-07-12 19:26:27.006811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.067 [2024-07-12 19:26:27.006822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.067 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.007206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.007219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.007601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.007613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.008011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.008022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.008430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.008442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.008820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.008830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.009211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.009223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.009628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.009639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.010018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.010028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 
00:30:21.068 [2024-07-12 19:26:27.010394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.010406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.010790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.010801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.011197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.011209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.011492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.011504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.011885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.011897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.012269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.012281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.012734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.012746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.013156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.013168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.013556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.013567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.013784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.013796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 
00:30:21.068 [2024-07-12 19:26:27.014135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.014147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.014530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.014541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.014921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.014932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.015317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.015328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.015728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.015739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.016125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.016137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.016352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.016369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.016770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.016783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.017149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.017163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.017513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.017524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 
00:30:21.068 [2024-07-12 19:26:27.017903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.017914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.018259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.018271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.018644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.018655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.019048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.019058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.019440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.019451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.019832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.019843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.020246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.068 [2024-07-12 19:26:27.020257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.068 qpair failed and we were unable to recover it. 00:30:21.068 [2024-07-12 19:26:27.020674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.020686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.021058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.021070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.021505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.021517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 
00:30:21.069 [2024-07-12 19:26:27.021655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.021667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.022069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.022081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.022460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.022472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.022850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.022862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.023261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.023273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.023657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.023669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.024047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.024059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.024436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.024448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.024792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.024804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.025198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.025210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 
00:30:21.069 [2024-07-12 19:26:27.025588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.025599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.025977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.025988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.026383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.026395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.026782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.026792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.027172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.027184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.027562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.027574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.027947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.027959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.028337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.028349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.028662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.028672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.029044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.029055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 
00:30:21.069 [2024-07-12 19:26:27.029434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.029445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.029822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.029833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.030210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.030221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.030467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.030479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.030876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.030887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.031259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.031270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.031654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.031664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.032049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.032061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.032434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.032446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.032823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.032835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 
00:30:21.069 [2024-07-12 19:26:27.033204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.033215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.033596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.033607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.034005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.034015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.034263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.034273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.034661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.034673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.035037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.035049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.035428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.035440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.035823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.035834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.069 qpair failed and we were unable to recover it. 00:30:21.069 [2024-07-12 19:26:27.036211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.069 [2024-07-12 19:26:27.036223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.036535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.036546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 
00:30:21.070 [2024-07-12 19:26:27.036948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.036959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.037343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.037355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.037646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.037658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.038039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.038050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.038421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.038432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.038810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.038821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.039203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.039222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.039600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.039610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.040006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.040016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.040474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.040485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 
00:30:21.070 [2024-07-12 19:26:27.040854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.040864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.041236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.041251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.041624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.041636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.042021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.042033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.042431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.042445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.042894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.042906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.043284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.043295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.043691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.043701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.044080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.044091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.044504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.044515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 
00:30:21.070 [2024-07-12 19:26:27.044926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.044937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.045409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.045451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.045835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.045849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.046249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.046263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.046670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.046682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.046928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.046941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.047327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.047338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.047621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.047632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.048036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.048047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.048438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.048449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 
00:30:21.070 [2024-07-12 19:26:27.048882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.048893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.049275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.049286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.049686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.049696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.050073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.050083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.050536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.050547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.050923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.050935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.051432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.051475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.051864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.051878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.052259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.052271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 00:30:21.070 [2024-07-12 19:26:27.052663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.070 [2024-07-12 19:26:27.052674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.070 qpair failed and we were unable to recover it. 
00:30:21.071 [2024-07-12 19:26:27.053022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.053034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.053408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.053426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.053801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.053811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.054188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.054200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.054562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.054572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.054952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.054964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.055355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.055367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.055751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.055763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.056159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.056170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.056573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.056585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 
00:30:21.071 [2024-07-12 19:26:27.056962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.056973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.057356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.057368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.057774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.057785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.058165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.058177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.058557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.058568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.058949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.058960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.059181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.059198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.059573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.059585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.059971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.059982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.060361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.060372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 
00:30:21.071 [2024-07-12 19:26:27.060615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.060626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.061004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.061015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.061256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.061268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.061649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.061659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.062060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.062071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.062460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.062471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.062847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.062857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.063236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.063246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.063600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.063613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.063994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.064004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 
00:30:21.071 [2024-07-12 19:26:27.064404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.064415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.064812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.064824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.065220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.065232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.065613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.065625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.066002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.066012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.066354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.066365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.066800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.066811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.067186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.067197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.067624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.067634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.068002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.068013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 
00:30:21.071 [2024-07-12 19:26:27.068412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.071 [2024-07-12 19:26:27.068423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.071 qpair failed and we were unable to recover it. 00:30:21.071 [2024-07-12 19:26:27.068804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.068816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.069198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.069209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.069618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.069630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.070031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.070042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.070426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.070438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.070819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.070830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.071114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.071130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.071511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.071522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.071902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.071912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 
00:30:21.072 [2024-07-12 19:26:27.072296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.072306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.072684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.072695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.073128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.073139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.073488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.073499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.073812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.073824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.074227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.074239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.074602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.074613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.075013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.075024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.075460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.075471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.075840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.075850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 
00:30:21.072 [2024-07-12 19:26:27.076255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.076266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.076650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.076660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.077041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.077052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.077442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.077454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.077850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.077861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.078242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.078253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.078667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.078677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.079053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.079064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.079442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.079454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.079830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.079841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 
00:30:21.072 [2024-07-12 19:26:27.080206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.080217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.080597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.080609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.081011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.081022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.081300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.081311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.081689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.081700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.082080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.072 [2024-07-12 19:26:27.082091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.072 qpair failed and we were unable to recover it. 00:30:21.072 [2024-07-12 19:26:27.082457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.082469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.082847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.082857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.083067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.083081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.083463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.083475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 
00:30:21.073 [2024-07-12 19:26:27.083875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.083887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.084263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.084274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.084647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.084657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.085036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.085046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.085375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.085386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.085764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.085775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.086151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.086162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.086537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.086548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.086952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.086962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.087280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.087291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 
00:30:21.073 [2024-07-12 19:26:27.087677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.087687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.087997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.088008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.088386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.088397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.088603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.088615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.088974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.088985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.089362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.089373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.089774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.089788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.090166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.090177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.090552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.090563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.090942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.090953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 
00:30:21.073 [2024-07-12 19:26:27.091352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.091363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.091740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.091750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.092133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.092144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.092542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.092553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.092913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.092925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.093406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.093445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.093817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.093830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.094209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.094221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.094620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.094631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.095010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.095020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 
00:30:21.073 [2024-07-12 19:26:27.095422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.095434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.095805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.095815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.096218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.096229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.096624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.096634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.097010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.097020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.097411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.097422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.097856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.073 [2024-07-12 19:26:27.097868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.073 qpair failed and we were unable to recover it. 00:30:21.073 [2024-07-12 19:26:27.098234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.098246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.098623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.098635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.099011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.099022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 
00:30:21.074 [2024-07-12 19:26:27.099406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.099417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.099796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.099807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.100182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.100193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.100448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.100461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.100885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.100896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.101264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.101275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.101649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.101659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.102040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.102050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.102432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.102444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.102825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.102837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 
00:30:21.074 [2024-07-12 19:26:27.103219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.103230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.103622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.103632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.103996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.104007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.104400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.104411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.104788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.104799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.105210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.105220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.105572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.105582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.105894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.105904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.106276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.106287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.106660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.106670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 
00:30:21.074 [2024-07-12 19:26:27.107077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.107087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.107446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.107458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.107829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.107840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.108217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.108228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.108438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.108453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.108823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.108833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.109209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.109219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.109598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.109609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.109989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.110000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.110394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.110405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 
00:30:21.074 [2024-07-12 19:26:27.110778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.110789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.111004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.111016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.111400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.111411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.111785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.111795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.112152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.112163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.112554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.112566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.112995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.113006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.113246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.074 [2024-07-12 19:26:27.113258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.074 qpair failed and we were unable to recover it. 00:30:21.074 [2024-07-12 19:26:27.113638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.113649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.114026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.114036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 
00:30:21.075 [2024-07-12 19:26:27.114405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.114417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.114793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.114804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.115180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.115192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.115566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.115576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.115971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.115982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.116361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.116372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.116749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.116761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.117135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.117146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.117527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.117538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.117917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.117928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 
00:30:21.075 [2024-07-12 19:26:27.118170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.118181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.118554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.118565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.118963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.118973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.119337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.119347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.119725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.119736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.120112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.120127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.120511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.120522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.120780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.120792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.121155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.121166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.121549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.121560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 
00:30:21.075 [2024-07-12 19:26:27.121954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.121964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.122344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.122355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.122734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.122744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.123124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.123136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.123506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.123517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.123898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.123908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.124394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.124433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.124800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.124813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.125212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.125223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.125610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.125621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 
00:30:21.075 [2024-07-12 19:26:27.125997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.126007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.126349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.126364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.126759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.126770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.127146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.127158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.127525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.127535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.127819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.127831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.128235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.128245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.128631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.128641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.129017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.129028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.075 qpair failed and we were unable to recover it. 00:30:21.075 [2024-07-12 19:26:27.129418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.075 [2024-07-12 19:26:27.129429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 
00:30:21.076 [2024-07-12 19:26:27.129825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.129835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.130208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.130220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.130596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.130606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.130873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.130883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.131291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.131302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.131684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.131695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.132074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.132085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.132459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.132470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.132667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.132680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.133053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.133063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 
00:30:21.076 [2024-07-12 19:26:27.133329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.133339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.133723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.133733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.134142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.134153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.134429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.134439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.134813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.134823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.135200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.135211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.135606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.135616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.135992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.136002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.136363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.136376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.136755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.136766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 
00:30:21.076 [2024-07-12 19:26:27.137160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.137171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.137570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.137581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.137955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.137967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.138339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.138351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.138747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.138758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.139135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.139146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.139398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.139410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.139793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.139804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.140199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.140210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.140605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.140615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 
00:30:21.076 [2024-07-12 19:26:27.140992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.141003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.141358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.141368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.141772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.141783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.142158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.142170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.142550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.142561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.142936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.142948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.143327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.143338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.143706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.143716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.144088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.144098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 00:30:21.076 [2024-07-12 19:26:27.144436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.144446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.076 qpair failed and we were unable to recover it. 
00:30:21.076 [2024-07-12 19:26:27.144801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.076 [2024-07-12 19:26:27.144811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.145189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.145200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.145576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.145586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.145959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.145969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.146365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.146376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.146755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.146767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.147136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.147147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.147538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.147548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.147943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.147953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.148329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.148341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 
00:30:21.077 [2024-07-12 19:26:27.148695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.148707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.149078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.149090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.149489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.149501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.149875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.149886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.150262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.150273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.150643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.150655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.151050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.151062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.151439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.151450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.151825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.151836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.152208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.152218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 
00:30:21.077 [2024-07-12 19:26:27.152613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.152624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.152998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.153009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.153217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.153229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.153573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.153583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.153980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.153990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.154386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.154397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.154771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.154782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.155150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.155162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.155570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.155580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.155946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.155956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 
00:30:21.077 [2024-07-12 19:26:27.156330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.156340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.156714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.156725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.157120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.157134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.157489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.157500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.157804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.157816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.158187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.077 [2024-07-12 19:26:27.158198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.077 qpair failed and we were unable to recover it. 00:30:21.077 [2024-07-12 19:26:27.158572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.158582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.159023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.159033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.159398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.159409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.159784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.159794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 
00:30:21.078 [2024-07-12 19:26:27.160159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.160169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.160560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.160570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.160943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.160953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.161328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.161338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.161731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.161742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.162116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.162135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.162512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.162524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.162896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.162907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.163403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.163440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.163822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.163835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 
00:30:21.078 [2024-07-12 19:26:27.164312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.164324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.164713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.164724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.165098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.165108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.165402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.165412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.165794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.165805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.166191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.166202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.166571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.166581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.166958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.166968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.167369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.167381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.167705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.167716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 
00:30:21.078 [2024-07-12 19:26:27.168110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.168126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.168426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.168436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.168806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.168816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.169193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.169203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.169629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.169639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.078 [2024-07-12 19:26:27.170018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.078 [2024-07-12 19:26:27.170028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.078 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.170400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.170411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.170779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.170790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.171074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.171085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.171371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.171383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 
00:30:21.356 [2024-07-12 19:26:27.171765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.171776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.172154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.172165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.172557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.172567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.173016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.173028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.173312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.173323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.173706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.173716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.174113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.174126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.174536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.174547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.174919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.174929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.175285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.175295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 
00:30:21.356 [2024-07-12 19:26:27.175698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.175709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.176085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.176096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.176534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.176545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.176924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.176935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.177339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.177377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.177784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.177797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.356 qpair failed and we were unable to recover it. 00:30:21.356 [2024-07-12 19:26:27.178020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.356 [2024-07-12 19:26:27.178034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.178430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.178442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.178720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.178731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.179107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.179117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 
00:30:21.357 [2024-07-12 19:26:27.179517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.179529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.179913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.179924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.180315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.180326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.180758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.180768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.181128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.181139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.181488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.181499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.181896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.181907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.182422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.182460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.182844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.182857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.183288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.183302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 
00:30:21.357 [2024-07-12 19:26:27.183610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.183625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.183942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.183953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.184335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.184346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.184721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.184731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.185094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.185104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.185481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.185492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.185765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.185775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.186160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.186170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.186538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.186549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.186819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.186830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 
00:30:21.357 [2024-07-12 19:26:27.187203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.187214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.187587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.187598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.188001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.188013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.188378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.188390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.188864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.188874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.357 qpair failed and we were unable to recover it. 00:30:21.357 [2024-07-12 19:26:27.189240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.357 [2024-07-12 19:26:27.189251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.189656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.189666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.190040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.190050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.190433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.190445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.190809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.190821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 
00:30:21.358 [2024-07-12 19:26:27.191144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.191155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.191528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.191538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.191882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.191892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.192267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.192278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.192646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.192656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.193071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.193081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.193449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.193460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.193762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.193773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.194179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.194190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.194560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.194570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 
00:30:21.358 [2024-07-12 19:26:27.194998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.195008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.195291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.195300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.195666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.195676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.196053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.196063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.196425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.196436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.196810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.196820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.197193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.197204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.197585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.197595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.197969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.197980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.198361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.198373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 
00:30:21.358 [2024-07-12 19:26:27.198738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.198749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.199180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.199191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.199565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.199575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.199950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.199960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.200416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.200454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.200837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.358 [2024-07-12 19:26:27.200850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.358 qpair failed and we were unable to recover it. 00:30:21.358 [2024-07-12 19:26:27.201323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.201361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.201674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.201688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.202145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.202157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.202555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.202565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 
00:30:21.359 [2024-07-12 19:26:27.203034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.203045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.203424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.203435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.203837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.203847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.204205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.204216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.204459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.204470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.204891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.204901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.205276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.205288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.205592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.205603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.206017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.206028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.206425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.206437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 
00:30:21.359 [2024-07-12 19:26:27.206840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.206851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.207229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.207240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.207535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.207546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.207930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.207941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.208338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.208349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.208723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.208733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.209108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.209118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.209489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.209500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.209868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.209882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.210259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.210270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 
00:30:21.359 [2024-07-12 19:26:27.210709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.210720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.210995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.211007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.211369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.211381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.211754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.211764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.212139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.212149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.212532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.359 [2024-07-12 19:26:27.212542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.359 qpair failed and we were unable to recover it. 00:30:21.359 [2024-07-12 19:26:27.212776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.212788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.213200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.213211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.213636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.213646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.214026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.214036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 
00:30:21.360 [2024-07-12 19:26:27.214441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.214452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.214827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.214838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.215213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.215225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.215618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.215629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.216022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.216033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.216398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.216409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.216772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.216782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.217151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.217162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.217522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.217533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.217908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.217919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 
00:30:21.360 [2024-07-12 19:26:27.218291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.218302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.218676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.218686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.219077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.219087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.219463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.219474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.219840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.219850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.220226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.220239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.220596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.220606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.220982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.220993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.221365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.221377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.221756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.221767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 
00:30:21.360 [2024-07-12 19:26:27.222168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.222180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.222569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.222580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.222954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.222964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.223342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.223352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.360 [2024-07-12 19:26:27.223745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.360 [2024-07-12 19:26:27.223755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.360 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.224083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.224093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.224465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.224476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.224748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.224758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.225151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.225161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.225564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.225574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-12 19:26:27.225936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.225947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.226321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.226332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.226724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.226734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.227110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.227120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.227505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.227515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.227891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.227901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.228400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.228438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.228818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.228832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.229224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.229236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.229610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.229622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-12 19:26:27.230026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.230038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.230440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.230450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.230824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.230838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.231213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.231224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.231597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.231607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.231933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.231944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.232317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.232328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.232542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.232558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.232916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.232926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.233231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.233243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 
00:30:21.361 [2024-07-12 19:26:27.233632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.233643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.234027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.234037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.234403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.234414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.234791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.234801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.235188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.235199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.361 [2024-07-12 19:26:27.235574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.361 [2024-07-12 19:26:27.235584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.361 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.235985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.235995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.236386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.236398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.236770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.236782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.237156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.237167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 
00:30:21.362 [2024-07-12 19:26:27.237534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.237544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.237919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.237930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.238305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.238316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.238689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.238700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.239005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.239016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.239360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.239371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.239744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.239754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.240130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.240141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.240464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.240474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.240851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.240861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 
00:30:21.362 [2024-07-12 19:26:27.241251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.241263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.241643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.241654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.242052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.242063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.242387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.242397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.242760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.242770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.243147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.243158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.243524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.243534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.243966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.243976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.244342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.362 [2024-07-12 19:26:27.244354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.362 qpair failed and we were unable to recover it. 00:30:21.362 [2024-07-12 19:26:27.244731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.244741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-12 19:26:27.245094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.245104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.245483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.245495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.245851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.245863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.246237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.246249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.246594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.246605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.246983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.246993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.247360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.247371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.247743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.247754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.248148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.248159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.248550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.248560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-12 19:26:27.248934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.248945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.249324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.249334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.249727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.249737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.250110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.250121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.250493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.250503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.250876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.250886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.251390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.251428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.251816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.251830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.252209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.252221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.252607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.252618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 
00:30:21.363 [2024-07-12 19:26:27.253021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.253032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.253434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.253445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.253820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.253830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.254206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.254217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.254614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.254624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.254998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.255008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.255289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.255304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.255683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.255694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.363 qpair failed and we were unable to recover it. 00:30:21.363 [2024-07-12 19:26:27.256096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.363 [2024-07-12 19:26:27.256106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.256469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.256479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 
00:30:21.364 [2024-07-12 19:26:27.256851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.256866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.257239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.257250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.257603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.257613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.257821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.257832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.258211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.258222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.258503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.258515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.258882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.258893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.259258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.259269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.259649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.259659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.260033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.260043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 
00:30:21.364 [2024-07-12 19:26:27.260395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.260406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.260779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.260790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.261206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.261217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.261485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.261496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.261891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.261903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.262278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.262289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.262662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.262673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.263046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.263056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.263422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.263432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.263806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.263816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 
00:30:21.364 [2024-07-12 19:26:27.264190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.264200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.264576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.264586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.264957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.264968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.265341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.265352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.265613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.265624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.265991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.266002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.266393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.266404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.364 qpair failed and we were unable to recover it. 00:30:21.364 [2024-07-12 19:26:27.266769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.364 [2024-07-12 19:26:27.266782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.267153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.267164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.267548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.267558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 
00:30:21.365 [2024-07-12 19:26:27.267955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.267965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.268338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.268349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.268728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.268739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.269113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.269127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.269504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.269515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.269887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.269899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.270275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.270286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.270661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.270671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.271070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.271080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.271444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.271455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 
00:30:21.365 [2024-07-12 19:26:27.271719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.271729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.272102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.272113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.272480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.272491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.272918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.272928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.273361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.273399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.273783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.273796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.274202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.274213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.274603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.274615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.274990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.275000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.275358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.275370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 
00:30:21.365 [2024-07-12 19:26:27.275770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.275780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.276152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.276163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.276536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.276547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.276929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.276940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.277314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.277326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.277678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.277690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.278105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.278116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.278502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.278513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.365 qpair failed and we were unable to recover it. 00:30:21.365 [2024-07-12 19:26:27.278907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.365 [2024-07-12 19:26:27.278917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.279398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.279436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 
00:30:21.366 [2024-07-12 19:26:27.279890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.279903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.280390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.280428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.280824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.280837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.281213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.281225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.281623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.281633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.282010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.282020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.282406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.282416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.282794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.282804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.283183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.283194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.283566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.283577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 
00:30:21.366 [2024-07-12 19:26:27.283974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.283986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.284379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.284390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.284765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.284775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.285146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.285157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.285529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.285539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.285913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.285923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.286286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.286297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.286603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.286614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.286957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.286967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.287340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.287351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 
00:30:21.366 [2024-07-12 19:26:27.287725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.287735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.288112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.288125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.288503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.288514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.288888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.288898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.289404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.289442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.289826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.289838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.290183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.290195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.290591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.290602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.290970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.290982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 00:30:21.366 [2024-07-12 19:26:27.291357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.366 [2024-07-12 19:26:27.291368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.366 qpair failed and we were unable to recover it. 
00:30:21.367 [2024-07-12 19:26:27.291582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.291595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.291974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.291984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.292356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.292367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.292740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.292750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.293177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.293188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.293586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.293601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.293976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.293986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.294359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.294369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.294768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.294779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.295154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.295165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 
00:30:21.367 [2024-07-12 19:26:27.295612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.295623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.296043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.296053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.296420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.296430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.296722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.296732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.297112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.297125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.297513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.297523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.297917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.297927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.298291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.298302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.298670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.298680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.299105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.299116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 
00:30:21.367 [2024-07-12 19:26:27.299326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.299339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.299657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.299668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.300043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.300053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.300432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.300443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.300836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.300847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.301223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.301234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.301621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.301632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.302007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.367 [2024-07-12 19:26:27.302018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.367 qpair failed and we were unable to recover it. 00:30:21.367 [2024-07-12 19:26:27.302436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.302446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.302820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.302831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 
00:30:21.368 [2024-07-12 19:26:27.303201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.303212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.303591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.303602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.303984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.303998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.304265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.304275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.304642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.304654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.304984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.304994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.305465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.305475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.305847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.305859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.306326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.306364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.306778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.306791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 
00:30:21.368 [2024-07-12 19:26:27.307220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.307231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.307621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.307632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.308009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.308021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.308405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.308416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.308764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.308775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.309049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.309059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.309432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.309443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.309815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.309826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.310179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.310190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.310448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.310458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 
00:30:21.368 [2024-07-12 19:26:27.310824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.310834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.311202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.311212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.311563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.311574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.311946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.311956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.312328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.312339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.312614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.368 [2024-07-12 19:26:27.312624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.368 qpair failed and we were unable to recover it. 00:30:21.368 [2024-07-12 19:26:27.312867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.312877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.313332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.313343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.313742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.313753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.314030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.314044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 
00:30:21.369 [2024-07-12 19:26:27.314417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.314428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.314823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.314835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.315203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.315215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.315646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.315657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.316061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.316071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.316462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.316473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.316845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.316855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.317235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.317245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.317597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.317607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.317978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.317988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 
00:30:21.369 [2024-07-12 19:26:27.318359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.318371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.318742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.318753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.319151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.319162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.319599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.319610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.319982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.319993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.320364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.320375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.320779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.320789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.321169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.321180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.321395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.321409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.321792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.321803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 
00:30:21.369 [2024-07-12 19:26:27.322151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.322162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.322562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.322572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.322944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.322955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.323330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.323342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.323746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.323757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.324134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.324146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.324488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.324498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.324875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.324885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.369 qpair failed and we were unable to recover it. 00:30:21.369 [2024-07-12 19:26:27.325272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.369 [2024-07-12 19:26:27.325283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.325744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.325755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-12 19:26:27.326039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.326050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.326328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.326339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.326742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.326753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.327131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.327141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.327488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.327498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.327872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.327882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.328290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.328301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.328549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.328560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.328929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.328940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.329227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.329237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-12 19:26:27.329611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.329622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.329996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.330007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.330330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.330342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.330722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.330732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.331129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.331141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.331390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.331400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.331778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.331788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.332172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.332183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.332551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.332561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.332875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.332886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 
00:30:21.370 [2024-07-12 19:26:27.333256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.333267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.333639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.333649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.334045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.334055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.334338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.334349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.334727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.334737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.335112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.335126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.335498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.335509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.335886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.370 [2024-07-12 19:26:27.335896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.370 qpair failed and we were unable to recover it. 00:30:21.370 [2024-07-12 19:26:27.336235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.336249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.336621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.336631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 
00:30:21.371 [2024-07-12 19:26:27.337002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.337014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.337300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.337311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.337522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.337534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.337907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.337918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.338261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.338272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.338619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.338630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.338909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.338921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.339198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.339212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.339584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.339595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.339875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.339885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 
00:30:21.371 [2024-07-12 19:26:27.340257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.340268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.340679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.340689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.341053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.341063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.341356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.341367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.341690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.341701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.342077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.342087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.342379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.342390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.342769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.342779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.343154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.343165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.343544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.343554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 
00:30:21.371 [2024-07-12 19:26:27.343927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.343938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.344318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.344329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.344704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.344715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.345088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.345099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.345481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.345492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.345870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.345882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.346256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.346267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.346670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.346681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.347080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.347090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.371 qpair failed and we were unable to recover it. 00:30:21.371 [2024-07-12 19:26:27.347344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.371 [2024-07-12 19:26:27.347355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-12 19:26:27.347731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.347741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.348071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.348083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.348461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.348472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.348852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.348863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.349241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.349253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.349641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.349651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.349921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.349931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.350233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.350244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.350621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.350631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.351008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.351018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-12 19:26:27.351416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.351426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.351805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.351816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.352132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.352144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.352434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.352444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.352733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.352745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.352987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.352997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.353372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.353383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.353779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.353789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.354164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.354175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.354559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.354570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 
00:30:21.372 [2024-07-12 19:26:27.354925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.354935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.355396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.355407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.355802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.355812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.356189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.356200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.356628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.356640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.356954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.356965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.357334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.357345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.357719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.357729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.358108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.372 [2024-07-12 19:26:27.358118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.372 qpair failed and we were unable to recover it. 00:30:21.372 [2024-07-12 19:26:27.358558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.358569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 
00:30:21.373 [2024-07-12 19:26:27.358946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.358956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.359448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.359486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.359706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.359719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1aae220 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.359901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abbf20 is same with the state(5) to be set 00:30:21.373 [2024-07-12 19:26:27.360666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.360754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.361416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.361503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.361887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.361923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.362386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.362474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.362955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.362989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.363503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.363591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 
00:30:21.373 [2024-07-12 19:26:27.363988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.364023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.364294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.364327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.364781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.364810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.365208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.365239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.365649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.365678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.366082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.366121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.366550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.366581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.367005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.367034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.367308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.367338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.367765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.367794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 
00:30:21.373 [2024-07-12 19:26:27.368225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.368255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.368684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.368712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.373 [2024-07-12 19:26:27.369021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.373 [2024-07-12 19:26:27.369054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.373 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.369470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.369500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.369909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.369937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.370372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.370402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.370823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.370851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.371359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.371388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.371777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.371805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 00:30:21.374 [2024-07-12 19:26:27.372224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.374 [2024-07-12 19:26:27.372253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.374 qpair failed and we were unable to recover it. 
00:30:21.380 [2024-07-12 19:26:27.456476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.456506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.456903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.456931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.457242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.457272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.457726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.457755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.458064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.458091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.458585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.458615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.459005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.459034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.459497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.459526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.459842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.459870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.460309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.460339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 
00:30:21.380 [2024-07-12 19:26:27.460740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.460767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.461138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.461166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.380 [2024-07-12 19:26:27.461476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.380 [2024-07-12 19:26:27.461511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.380 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.461970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.461999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.462198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.462227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.462674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.462701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.462998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.463027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.463582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.463614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.464021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.464048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.464487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.464516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 
00:30:21.381 [2024-07-12 19:26:27.464823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.464852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.465272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.465301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.465752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.465780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.466082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.466110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.466622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.466651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.466939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.466966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.467410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.467439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.467644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.467672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.381 [2024-07-12 19:26:27.468155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.381 [2024-07-12 19:26:27.468184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.381 qpair failed and we were unable to recover it. 00:30:21.703 [2024-07-12 19:26:27.468574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.468601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 
00:30:21.704 [2024-07-12 19:26:27.468901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.468929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.469349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.469379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.469825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.469855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.470222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.470252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.470586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.470614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.470940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.470968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.471335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.471366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.471779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.471808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.472107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.472142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.472615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.472643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 
00:30:21.704 [2024-07-12 19:26:27.473078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.473106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.473665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.473695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.474091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.474120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.475801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.475856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.476258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.476291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.476592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.476628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.476968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.476997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.477434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.477465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.477857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.477887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.478312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.478342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 
00:30:21.704 [2024-07-12 19:26:27.480306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.480362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.480682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.480719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.482387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.482445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.482880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.482911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.483334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.483365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.483791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.483819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.484247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.484276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.484573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.484602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.485034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.485061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.485556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.485585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 
00:30:21.704 [2024-07-12 19:26:27.486013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.486041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.486410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.486440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.486855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.486884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.487203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.487238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.487646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.487674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.488099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.488134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.488543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.488573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.488982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.489010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.489422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.489451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.489743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.489777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 
00:30:21.704 [2024-07-12 19:26:27.490184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.490213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.490587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.704 [2024-07-12 19:26:27.490615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.704 qpair failed and we were unable to recover it. 00:30:21.704 [2024-07-12 19:26:27.490956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.490985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.492434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.492490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.492846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.492878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.493901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.493941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.494385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.494415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.494821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.494852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.495254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.495287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.495669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.495701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 
00:30:21.705 [2024-07-12 19:26:27.496113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.496154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.496566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.496594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.497004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.497033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.497458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.497488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.497905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.497933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.498362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.498392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.498725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.498753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.499065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.499098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.499438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.499468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.499893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.499922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 
00:30:21.705 [2024-07-12 19:26:27.500347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.500376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.502045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.502097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.502569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.502608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.503021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.503050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.503411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.503441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.503723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.503755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.504169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.504201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.504668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.504696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.505118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.505158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.505612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.505640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 
00:30:21.705 [2024-07-12 19:26:27.506049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.506076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.506415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.506444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.506804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.506833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.507120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.507158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.507606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.507634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.508059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.508087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.508540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.508571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.508999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.509027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.509425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.509456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.509780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.509808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 
00:30:21.705 [2024-07-12 19:26:27.510139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.510168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.510607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.510636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.511046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.511073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.511490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.511520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.511930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.511958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.705 qpair failed and we were unable to recover it. 00:30:21.705 [2024-07-12 19:26:27.512367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.705 [2024-07-12 19:26:27.512397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.512815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.512843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.513400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.513489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.513959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.513997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.514439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.514471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 
00:30:21.706 [2024-07-12 19:26:27.514889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.514918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.515201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.515239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.515654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.515684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.516075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.516106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.516516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.516546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.516947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.516979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.517276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.517309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.517727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.517757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.518215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.518246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.518663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.518692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 
00:30:21.706 [2024-07-12 19:26:27.519181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.519209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.519647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.519677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.520111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.520219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.520555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.520584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.520889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.520917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.521303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.521334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.521748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.521776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.522240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.522268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.522700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.522728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 00:30:21.706 [2024-07-12 19:26:27.523000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.706 [2024-07-12 19:26:27.523029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.706 qpair failed and we were unable to recover it. 
00:30:21.706 [2024-07-12 19:26:27.523418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.706 [2024-07-12 19:26:27.523447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:21.706 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111; sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for each further connection attempt, with only the timestamps advancing, up to 19:26:27.607 ...]
00:30:21.711 [2024-07-12 19:26:27.608374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.608403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.608799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.608826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.609255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.609284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.609720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.609748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.610065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.610096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.610461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.610490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.610944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.610974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.611379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.611408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.611839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.611868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.612277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.612306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 
00:30:21.711 [2024-07-12 19:26:27.612743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.612771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.613187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.613216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.613526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.613555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.613970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.613998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.614379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.614409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.614824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.614852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.615252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.615283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.615568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.615599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.616022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.711 [2024-07-12 19:26:27.616051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.711 qpair failed and we were unable to recover it. 00:30:21.711 [2024-07-12 19:26:27.616501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.616531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 
00:30:21.712 [2024-07-12 19:26:27.616956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.616985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.617398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.617426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.617850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.617878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.618254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.618283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.618625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.618653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.619061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.619089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.619616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.619646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.620067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.620095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.620512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.620542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.621069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.621097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 
00:30:21.712 [2024-07-12 19:26:27.621512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.621541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.621965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.621994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.622418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.622454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.622861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.622889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.623256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.623286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.623727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.623754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.624178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.624207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.624488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.624516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.624829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.624856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.625275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.625304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 
00:30:21.712 [2024-07-12 19:26:27.625748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.625776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.626201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.626230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.626668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.626696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.627075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.627104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.627436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.627463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.627878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.627906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.628329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.628358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.628778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.628806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.629086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.629118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.629576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.629604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 
00:30:21.712 [2024-07-12 19:26:27.630066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.630094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.630596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.630627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.631040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.631068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.631519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.631548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.631948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.631977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.712 [2024-07-12 19:26:27.632348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.712 [2024-07-12 19:26:27.632378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.712 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.632806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.632835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.633140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.633168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.633476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.633504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.633934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.633963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 
00:30:21.713 [2024-07-12 19:26:27.634384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.634413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.634845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.634874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.635293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.635322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.635715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.635743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.636061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.636089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.636519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.636548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.636958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.636986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.637363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.637392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.637878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.637907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.638335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.638365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 
00:30:21.713 [2024-07-12 19:26:27.638715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.638743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.639140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.639169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.639632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.639667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.640094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.640131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.640542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.640571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.640982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.641010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.641456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.641485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.641799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.641828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.642156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.642185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.642629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.642657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 
00:30:21.713 [2024-07-12 19:26:27.643085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.643113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.643519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.643548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.643871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.643902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.644310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.644340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.644745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.644773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.645178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.645208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.645543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.645572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.645980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.646008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.646430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.646459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.646888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.646915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 
00:30:21.713 [2024-07-12 19:26:27.647327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.647356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.647805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.647834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.648207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.648236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.648671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.648700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.649119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.649158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.649623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.649652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.713 [2024-07-12 19:26:27.650059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.713 [2024-07-12 19:26:27.650087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.713 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.650499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.650529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.650934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.650963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.651369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.651399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 
00:30:21.714 [2024-07-12 19:26:27.651821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.651848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.652281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.652311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.652608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.652637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.653043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.653071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.653503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.653533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.653933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.653961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.654363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.654392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.654823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.654851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.655270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.655300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.655761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.655789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 
00:30:21.714 [2024-07-12 19:26:27.656219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.656249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.656684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.656712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.657151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.657185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.657606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.657634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.657978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.658005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.658305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.658335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.658786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.658814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.659199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.659228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.659530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.659558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.659963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.659992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 
00:30:21.714 [2024-07-12 19:26:27.660307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.660336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.660819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.660847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.661316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.661345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.661783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.661810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.662242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.662271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.662682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.662709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.663141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.663171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.663612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.663640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.664064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.664092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.664529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.664558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 
00:30:21.714 [2024-07-12 19:26:27.664981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.665010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.665429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.665459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.665873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.665901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.666237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.666266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.666659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.666688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.666981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.667010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.667447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.667476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.667788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.667817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.668128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.668157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 00:30:21.714 [2024-07-12 19:26:27.668625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.668663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.714 qpair failed and we were unable to recover it. 
00:30:21.714 [2024-07-12 19:26:27.669135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.714 [2024-07-12 19:26:27.669164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.669501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.669529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.669919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.669947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.670261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.670292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.670711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.670740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.671141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.671171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.671592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.671622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.672040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.672068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.672517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.672546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.672980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.673008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 
00:30:21.715 [2024-07-12 19:26:27.673378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.673406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.673822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.673850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.674271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.674300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.674731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.674759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.675210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.675238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.675602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.675630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.675937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.675968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.676462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.676491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.676932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.676961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.677386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.677415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 
00:30:21.715 [2024-07-12 19:26:27.677833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.677862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.678310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.678341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.678660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.678688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.679141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.679170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.679628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.679656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.680075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.680104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.680525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.680554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.680970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.680999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.681436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.681465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.681905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.681933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 
00:30:21.715 [2024-07-12 19:26:27.682231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.682263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.682622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.682650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.682909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.682936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.683433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.683528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.684014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.684050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.684411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.684443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.684838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.684866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.685275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.685306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.685662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.685691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.686115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.686183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 
00:30:21.715 [2024-07-12 19:26:27.686612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.686641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.687080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.687110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.687420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.687451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.687760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.687789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.715 [2024-07-12 19:26:27.688186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.715 [2024-07-12 19:26:27.688216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.715 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.688653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.688682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.689014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.689042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.689546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.689575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.689967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.689996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.690324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.690358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 
00:30:21.716 [2024-07-12 19:26:27.690791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.690820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.691246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.691276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.691711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.691740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.692101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.692140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.692582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.692611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.692928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.692957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.693413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.693444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.693846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.693876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.694231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.694261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.694668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.694696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 
00:30:21.716 [2024-07-12 19:26:27.695133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.695164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.695624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.695653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.696067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.696095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.696564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.696594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.697030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.697060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.697458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.697488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.697906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.697934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.698451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.698544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.698951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.698992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.699458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.699490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 
00:30:21.716 [2024-07-12 19:26:27.699915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.699945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.700321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.700350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.700804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.700833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.701211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.701241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.701569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.701602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.702021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.702050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.702455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.702484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.702915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.702943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.703322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.703352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.703696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.703736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 
00:30:21.716 [2024-07-12 19:26:27.704154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.704185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.704625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.704654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.716 qpair failed and we were unable to recover it. 00:30:21.716 [2024-07-12 19:26:27.705067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.716 [2024-07-12 19:26:27.705094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.705524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.705554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.705980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.706008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.706439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.706470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.706804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.706832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.707234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.707263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.707723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.707751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.708049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.708081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 
00:30:21.717 [2024-07-12 19:26:27.708444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.708473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.708941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.708970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.709381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.709410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.709810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.709840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.710255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.710284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.710695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.710723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.711152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.711182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.711606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.711634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.712077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.712106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.712524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.712553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 
00:30:21.717 [2024-07-12 19:26:27.712967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.712996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.713433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.713462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.713897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.713924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.714289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.714319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.714623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.714652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.715024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.715052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.715454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.715484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.715835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.715863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.716269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.716298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.716625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.716657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 
00:30:21.717 [2024-07-12 19:26:27.717059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.717087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.717528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.717557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.717946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.717974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.718275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.718305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.718702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.718731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.719139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.719169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.719618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.719647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.720072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.720101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.720582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.720612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.721047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.721082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 
00:30:21.717 [2024-07-12 19:26:27.721501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.721532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.721965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.721995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.722324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.722358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.722761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.722790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.723129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.723159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.723369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.723398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.717 qpair failed and we were unable to recover it. 00:30:21.717 [2024-07-12 19:26:27.723824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.717 [2024-07-12 19:26:27.723853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.724280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.724309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.724685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.724714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.725134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.725163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 
00:30:21.718 [2024-07-12 19:26:27.725592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.725622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.725975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.726004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.726462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.726493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.726891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.726920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.727387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.727417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.727851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.727879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.728246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.728275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.728718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.728747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.729144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.729173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.729590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.729619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 
00:30:21.718 [2024-07-12 19:26:27.729973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.730003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.730385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.730415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.730834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.730863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.731162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.731195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.731567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.731596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.732065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.732094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.732593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.732626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.732933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.732963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.733381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.733412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.733863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.733893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 
00:30:21.718 [2024-07-12 19:26:27.734208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.734240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.734707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.734735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.735168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.735198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.735623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.735652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.736065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.736094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.736424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.736453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.736892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.736921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.737239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.737268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.737718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.737746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.738071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.738108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 
00:30:21.718 [2024-07-12 19:26:27.738538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.738568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.738912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.738942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.739278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.739309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.739625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.739656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.740089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.740117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.740632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.740661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.741067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.741096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.741526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.741556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.741981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.742010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 00:30:21.718 [2024-07-12 19:26:27.742310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.718 [2024-07-12 19:26:27.742337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.718 qpair failed and we were unable to recover it. 
00:30:21.718 [2024-07-12 19:26:27.742723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.742752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.743187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.743216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.743629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.743657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.743953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.743983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.744439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.744469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.744822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.744850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.745159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.745191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.745633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.745662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.746117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.746153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.746465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.746494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 
00:30:21.719 [2024-07-12 19:26:27.746806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.746838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.747204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.747233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.747574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.747604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.748030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.748059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.748527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.748557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.748865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.748895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.749313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.749343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.749761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.749789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.750195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.750224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.750673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.750702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 
00:30:21.719 [2024-07-12 19:26:27.751098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.751136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.751540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.751570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.751994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.752023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.752344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.752377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.752809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.752838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.753272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.753302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.753724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.753752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.754189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.754219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.754648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.754677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.755098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.755145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 
00:30:21.719 [2024-07-12 19:26:27.755573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.755603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.756028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.756057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.756407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.756438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.756864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.756894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.757314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.757343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.757836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.757865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.758068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.758096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.758567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.758597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.759003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.759032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.759356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.759386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 
00:30:21.719 [2024-07-12 19:26:27.759823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.759852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.760267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.760297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.760715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.760744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.761180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.719 [2024-07-12 19:26:27.761209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.719 qpair failed and we were unable to recover it. 00:30:21.719 [2024-07-12 19:26:27.761654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.761683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.762074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.762104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.762546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.762576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.762993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.763022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.763432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.763462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.763890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.763920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 
00:30:21.720 [2024-07-12 19:26:27.764335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.764366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.764750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.764779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.765206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.765235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.765661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.765690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.766103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.766139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.766581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.766610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.767030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.767058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.767469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.767500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.767929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.767959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.768382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.768414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 
00:30:21.720 [2024-07-12 19:26:27.768873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.768903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.769337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.769432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.769905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.769942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.770352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.770384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.770813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.770842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.771268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.771298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.771766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.771795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.772237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.772268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.772579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.772609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.773026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.773065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 
00:30:21.720 [2024-07-12 19:26:27.773448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.773480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.773815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.773852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.774280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.774310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.774764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.774793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.775192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.775221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.775698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.775728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.776164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.776194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.776595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.776625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.777045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.777075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.777505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.777536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 
00:30:21.720 [2024-07-12 19:26:27.777905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.777934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.778337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.778369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.720 qpair failed and we were unable to recover it. 00:30:21.720 [2024-07-12 19:26:27.778786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.720 [2024-07-12 19:26:27.778815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.779248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.779278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.779693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.779721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.780000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.780031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.780456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.780487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.780917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.780946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.781357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.781387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.781863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.781892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 
00:30:21.721 [2024-07-12 19:26:27.782301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.782331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.782637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.782671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.783096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.783134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.783583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.783612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.784081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.784110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.784504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.784533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.784959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.784989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.785506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.785537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.785927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.785957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.786281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.786312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 
00:30:21.721 [2024-07-12 19:26:27.786743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.786772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.787137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.787166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.787621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.787649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.788063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.788092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.788597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.788628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.789074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.789102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.789558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.789589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.789895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.789925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.790466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.790562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.791094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.791160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 
00:30:21.721 [2024-07-12 19:26:27.791669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.791700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.791921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.791949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.792464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.792559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.792974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.793012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.793404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.793436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.793884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.793914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.794264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.794303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.794715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.794744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.795144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.795175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.795497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.795526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 
00:30:21.721 [2024-07-12 19:26:27.795965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.795994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.796432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.796461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.796851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.796880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.797295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.797326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.797762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.797791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.798218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.721 [2024-07-12 19:26:27.798249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.721 qpair failed and we were unable to recover it. 00:30:21.721 [2024-07-12 19:26:27.798678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.798708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.799157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.799186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.799563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.799592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.799989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.800019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 
00:30:21.722 [2024-07-12 19:26:27.800327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.800356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.800785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.800814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.801238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.801267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.801701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.801730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.802154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.802185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.802567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.802598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.803048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.803079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.803466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.803498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.803843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.803871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.804308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.804338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 
00:30:21.722 [2024-07-12 19:26:27.804759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.804787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.805108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.805165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.805627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.805656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.806092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.806132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.806603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.806634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.806949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.806979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.807227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.807257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.807718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.807749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.808066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.808095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.808554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.808591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 
00:30:21.722 [2024-07-12 19:26:27.808980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.809010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.809322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.809350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.809782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.809811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.810247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.810277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.810724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.810754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.811162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.811194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.811620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.811649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.812051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.812079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.812451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.812481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.812894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.812924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 
00:30:21.722 [2024-07-12 19:26:27.813345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.813374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.813801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.813830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.722 [2024-07-12 19:26:27.814267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.722 [2024-07-12 19:26:27.814298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.722 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.814792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.814823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.815335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.815368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.815835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.815864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.816282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.816313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.816744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.816773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.817189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.817219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.817712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.817741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 
00:30:21.996 [2024-07-12 19:26:27.818054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.818084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.818498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.818527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.818844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.818873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.819326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.819355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.819837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.819866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.820193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.820228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.820589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.820619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.821056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.821085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.821514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.821544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.821983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.822012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 
00:30:21.996 [2024-07-12 19:26:27.822348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.822378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.822803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.822832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.823279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.823309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.823749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.823779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.824087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.824117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.824566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.824595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.825073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.825101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.825444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.825479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.825890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.825919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.826341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.826377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 
00:30:21.996 [2024-07-12 19:26:27.826854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.826884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.827322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.827352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.827713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.827743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.828128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.828159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.828608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.828637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.829079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.829108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.829622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.829651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.830081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.830110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.996 [2024-07-12 19:26:27.830594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.996 [2024-07-12 19:26:27.830623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.996 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.830941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.830970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 
00:30:21.997 [2024-07-12 19:26:27.831296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.831325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.831791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.831820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.832239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.832269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.832699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.832729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.833162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.833193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.833643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.833672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.834095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.834142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.834588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.834616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.835046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.835075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.835519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.835550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 
00:30:21.997 [2024-07-12 19:26:27.835989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.836017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.836434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.836463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.836894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.836925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.837239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.837269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.837595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.837624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.838055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.838084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.838399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.838435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.838869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.838898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.839249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.839280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.839749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.839780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 
00:30:21.997 [2024-07-12 19:26:27.840178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.840210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.840666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.840695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.841079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.841108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.841592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.841622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.842049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.842079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.842518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.842549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.842981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.843012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.843445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.843477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.843909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.843940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.844363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.844405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 
00:30:21.997 [2024-07-12 19:26:27.844824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.844853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.845253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.845283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.845617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.845647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.846088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.846118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.846441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.846473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.846913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.846943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.847377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.847408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.847841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.847870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.848286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.997 [2024-07-12 19:26:27.848316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.997 qpair failed and we were unable to recover it. 00:30:21.997 [2024-07-12 19:26:27.848744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.848772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 
00:30:21.998 [2024-07-12 19:26:27.849280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.849310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.849729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.849758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.850231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.850262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.850711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.850741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.851120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.851160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.851572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.851602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.852046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.852075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.852505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.852535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.853016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.853045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.853359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.853393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 
00:30:21.998 [2024-07-12 19:26:27.853808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.853837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.854252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.854282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.854738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.854767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.855217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.855248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.855645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.855674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.856132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.856163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.856629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.856659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.857083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.857113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.857556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.857587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.857909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.857939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 
00:30:21.998 [2024-07-12 19:26:27.858369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.858401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.858800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.858829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.859289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.859319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.859748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.859777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.860212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.860242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.860704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.860734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.861212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.861242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.861656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.861685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.862140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.862171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.862620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.862655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 
00:30:21.998 [2024-07-12 19:26:27.863096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.863140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.863549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.863578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.864011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.864039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.864490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.864520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.864961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.864991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.865425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.865455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.865898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.865927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.866363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.866394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.866827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.866857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 00:30:21.998 [2024-07-12 19:26:27.867290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.998 [2024-07-12 19:26:27.867320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.998 qpair failed and we were unable to recover it. 
00:30:21.999 [2024-07-12 19:26:27.867650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.867679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.868159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.868191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.868660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.868689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.869131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.869162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.869624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.869654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.870073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.870102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.870571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.870602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.871032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.871062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.871534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.871566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.871996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.872027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 
00:30:21.999 [2024-07-12 19:26:27.872325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.872355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.872722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.872752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.873203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.873233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.873695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.873723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.874150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.874179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.874634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.874663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.874978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.875013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.875450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.875480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.875924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.875955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.876403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.876432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 
00:30:21.999 [2024-07-12 19:26:27.876868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.876898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.877335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.877365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.877692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.877722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.878117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.878155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.878403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.878434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.878770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.878800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.879234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.879264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.879666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.879695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.880113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.880153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.880548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.880585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 
00:30:21.999 [2024-07-12 19:26:27.881026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.881055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.881552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.881582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.881906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.881936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.882334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.882364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.882674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.882704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.883160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.883191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.883642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.883677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.884102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.884145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.884573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.884604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 00:30:21.999 [2024-07-12 19:26:27.885019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.885050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:21.999 qpair failed and we were unable to recover it. 
00:30:21.999 [2024-07-12 19:26:27.885488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:21.999 [2024-07-12 19:26:27.885520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.885824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.885857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.886270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.886301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.886758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.886788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.887226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.887256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.887665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.887694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.888108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.888145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.888583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.888612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.889020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.889049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.889514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.889544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 
00:30:22.000 [2024-07-12 19:26:27.889977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.890007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.890322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.890353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.890680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.890710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.891149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.891179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.891642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.891672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.891976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.892007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.892388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.892424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.892836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.892865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.893162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.893194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.893644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.893673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 
00:30:22.000 [2024-07-12 19:26:27.894120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.894158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.894380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.894409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.894843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.894873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.895203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.895234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.895653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.895683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.896080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.896109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.896451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.896481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.896978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.897007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.897447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.897478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.897955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.897985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 
00:30:22.000 [2024-07-12 19:26:27.898408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.898440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.898855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.898885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.899291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.899322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.000 [2024-07-12 19:26:27.899760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.000 [2024-07-12 19:26:27.899790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.000 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.900232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.900263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.900707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.900736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.901131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.901163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.901598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.901629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.902068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.902098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.902588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.902619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 
00:30:22.001 [2024-07-12 19:26:27.903057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.903087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.903439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.903469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.903853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.903883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.904290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.904321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.904741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.904771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.905188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.905218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.905643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.905673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.906075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.906107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.906366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.906400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 
00:30:22.001 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1619806 Killed "${NVMF_APP[@]}" "$@"
00:30:22.001 [2024-07-12 19:26:27.906819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.906851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 [2024-07-12 19:26:27.907271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.907302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:22.001 [2024-07-12 19:26:27.907749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.907780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:22.001 [2024-07-12 19:26:27.908212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.908248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:22.001 [2024-07-12 19:26:27.908713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.908744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.001 [2024-07-12 19:26:27.909211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.909244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 [2024-07-12 19:26:27.909720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.909750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
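The "Killed" message at the top of this block appears to be where the disconnect test takes down the previously started nvmf target application (pid 1619806); the trace that follows immediately restarts it via disconnect_init and nvmfappstart. While the target is down, every host-side connect() to 10.0.0.2:4420 fails with errno = 111, which on Linux is ECONNREFUSED: the connection is refused because nothing is listening on that port. A minimal bash sketch, not part of the test suite, that reproduces the same condition against the address and port printed in this log:

# Illustrative sketch only; assumes a Linux host with bash's /dev/tcp support and no
# NVMe-oF target currently listening on 10.0.0.2:4420 (the address/port from this log).
if ! (exec 3<>/dev/tcp/10.0.0.2/4420) 2>/dev/null; then
    echo "connect() to 10.0.0.2:4420 failed: connection refused (errno 111, ECONNREFUSED)"
fi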
00:30:22.001 [2024-07-12 19:26:27.910191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.910225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.910568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.910598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.911046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.911076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.911504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.911535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.911954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.911983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.912415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.912445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.912926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.912958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.913374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.913407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.913926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.913958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 00:30:22.001 [2024-07-12 19:26:27.914435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.001 [2024-07-12 19:26:27.914468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.001 qpair failed and we were unable to recover it. 
00:30:22.001 [2024-07-12 19:26:27.914929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.914960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 [2024-07-12 19:26:27.915393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.915425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 [2024-07-12 19:26:27.915866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.915896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 [2024-07-12 19:26:27.916331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.916363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1620758
00:30:22.001 [2024-07-12 19:26:27.916689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.001 [2024-07-12 19:26:27.916723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.001 qpair failed and we were unable to recover it.
00:30:22.001 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1620758
00:30:22.001 [2024-07-12 19:26:27.917142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.002 [2024-07-12 19:26:27.917176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.002 qpair failed and we were unable to recover it.
00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1620758 ']'
00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:22.002 [2024-07-12 19:26:27.917627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.002 [2024-07-12 19:26:27.917658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.002 qpair failed and we were unable to recover it.
00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:22.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:22.002 [2024-07-12 19:26:27.918077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.918109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:22.002 19:26:27 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.002 [2024-07-12 19:26:27.918543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.918577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.919069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.919100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.919595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.919627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.919945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.919976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.920330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.920363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.920673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.920705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.920947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.920986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.921412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.921449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 
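Interleaved with the connection errors above, the trace shows nvmfappstart launching a fresh nvmf_tgt (-i 0 -e 0xFFFF -m 0xF0) inside the cvl_0_0_ns_spdk network namespace, recording its pid as nvmfpid=1620758, and then calling waitforlisten, which blocks until the new process is up and serving its RPC socket at /var/tmp/spdk.sock. A simplified, illustrative sketch of what that wait amounts to follows; the real waitforlisten helper in SPDK's autotest_common.sh is more thorough, and the retry limit mirrors the 'local max_retries=100' seen in the trace:

# Simplified wait loop; pid and socket path are the ones printed in this log.
pid=1620758
sock=/var/tmp/spdk.sock
for _ in $(seq 1 100); do                      # max_retries=100, as in the trace above
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt ($pid) exited before listening"; break; }
    [ -S "$sock" ] && { echo "nvmf_tgt ($pid) is listening on $sock"; break; }
    sleep 0.1
done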
00:30:22.002 [2024-07-12 19:26:27.921775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.921809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.922206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.922238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.922683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.922713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.923040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.923073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.923509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.923541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.923931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.923962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.924426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.924458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.924890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.924928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.925301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.925333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.925787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.925818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 
00:30:22.002 [2024-07-12 19:26:27.926259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.926292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.926607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.926637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.926950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.926982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.927409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.927441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.927864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.927896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.928378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.928409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.928663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.928694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.929189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.929219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.929678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.929708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.930036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.930067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 
00:30:22.002 [2024-07-12 19:26:27.930560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.930591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.931038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.931070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.931429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.931460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.931893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.931922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.932304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.932337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.932789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.002 [2024-07-12 19:26:27.932819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.002 qpair failed and we were unable to recover it. 00:30:22.002 [2024-07-12 19:26:27.933233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.933263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.933728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.933757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.934079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.934109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.934661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.934692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 
00:30:22.003 [2024-07-12 19:26:27.935097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.935142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.935497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.935528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.935945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.935979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.936487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.936519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.936943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.936976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.937301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.937332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.937502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.937533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.937971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.938001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.938492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.938524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.938972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.939004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 
00:30:22.003 [2024-07-12 19:26:27.939521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.939553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.939989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.940019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.940358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.940389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.940807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.940839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.941274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.941304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.941725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.941754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.942084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.942114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.942609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.942647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.943049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.943079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.943541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.943572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 
00:30:22.003 [2024-07-12 19:26:27.943951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.943980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.944435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.944468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.944903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.944935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.945350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.945380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.945830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.945859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.946314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.946345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.946803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.946832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.947341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.947372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.947832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.947861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.948198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.948230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 
00:30:22.003 [2024-07-12 19:26:27.948672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.948705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.949163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.949194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.949637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.949667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.950109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.950149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.003 qpair failed and we were unable to recover it. 00:30:22.003 [2024-07-12 19:26:27.950560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.003 [2024-07-12 19:26:27.950589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.950881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.950915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.951352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.951383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.951836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.951865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.952290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.952322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.952764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.952795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 
00:30:22.004 [2024-07-12 19:26:27.953102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.953145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.953594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.953625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.954059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.954089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.954578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.954608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.955045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.955075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.955520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.955550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.955994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.956023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.956556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.956589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.956992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.957022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.957477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.957507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 
00:30:22.004 [2024-07-12 19:26:27.957881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.957911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.958337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.958367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.958700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.958731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.959155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.959185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.959605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.959637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.960081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.960111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.960546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.960577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.961045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.961082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.961636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.961666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.962101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.962143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 
00:30:22.004 [2024-07-12 19:26:27.962608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.962638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.963095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.963134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.963641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.963671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.963976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.964012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.964378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.964408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.964848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.964878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.965276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.965306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.965728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.965757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.004 qpair failed and we were unable to recover it. 00:30:22.004 [2024-07-12 19:26:27.966197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.004 [2024-07-12 19:26:27.966228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.966709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.966739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 
00:30:22.005 [2024-07-12 19:26:27.967187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.967218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.967653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.967682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.968138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.968171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.968500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.968533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.968977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.969007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.969435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.969465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.969890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.969921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.970331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.970362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
00:30:22.005 [2024-07-12 19:26:27.970744] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization...
00:30:22.005 [2024-07-12 19:26:27.970813] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:22.005 [2024-07-12 19:26:27.970812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.005 [2024-07-12 19:26:27.970845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.005 qpair failed and we were unable to recover it.
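The "Starting SPDK v24.09-pre ... DPDK 24.03.0 initialization" banner and the EAL parameter list in this block show the restarted target coming up with the requested core mask: the -m 0xF0 passed to nvmf_tgt appears in the EAL parameters as the coremask -c 0xF0, i.e. CPU cores 4 through 7. A quick bash check, for illustration only, of which cores a hex mask selects (mask value taken from this log):

# Decode a hex core mask into CPU core numbers; 0xF0 (from the log) selects cores 4-7.
mask=0xF0
for cpu in $(seq 0 31); do
    (( (mask >> cpu) & 1 )) && echo "core $cpu selected"
done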
00:30:22.005 [2024-07-12 19:26:27.971280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.971310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.971740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.971769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.972209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.972240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.972665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.972696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.973062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.973093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.973594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.973627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.974063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.974096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.974614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.974646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.975056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.975087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.975528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.975562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 
00:30:22.005 [2024-07-12 19:26:27.975986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.976017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.976448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.976480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.976918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.976949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.977329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.977360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.977806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.977838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.978261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.978293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.978722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.978752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.979073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.979104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.979610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.979641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.980067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.980100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 
00:30:22.005 [2024-07-12 19:26:27.980566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.980599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.981048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.981078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.981580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.981614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.982017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.982048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.982474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.982506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.982937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.982967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.983394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.983426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.983805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.983835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.984192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.005 [2024-07-12 19:26:27.984225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.005 qpair failed and we were unable to recover it. 00:30:22.005 [2024-07-12 19:26:27.984697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.984728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 
00:30:22.006 [2024-07-12 19:26:27.985129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.985169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.985610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.985640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.985989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.986020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.986378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.986411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.986855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.986885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.987288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.987320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.987796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.987825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.988257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.988288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.988728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.988759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.989211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.989241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 
00:30:22.006 [2024-07-12 19:26:27.989686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.989717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.990038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.990068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.990524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.990556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.990871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.990903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.991325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.991357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.991796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.991827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.992252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.992282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.992691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.992721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.993093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.993130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.993466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.993495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 
00:30:22.006 [2024-07-12 19:26:27.993937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.993967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.994295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.994329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.994769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.994798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.995248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.995279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.995794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.995823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.996263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.996294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.996749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.996780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.997067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.997107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.997633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.997664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.998156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.998187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 
00:30:22.006 [2024-07-12 19:26:27.998637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.998667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.998973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.999009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.999501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.999532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:27.999951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:27.999982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:28.000413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:28.000445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:28.000773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:28.000805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:28.001258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:28.001289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:28.001711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:28.001742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:28.002191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:28.002223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.006 qpair failed and we were unable to recover it. 00:30:22.006 [2024-07-12 19:26:28.002682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.006 [2024-07-12 19:26:28.002713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 
00:30:22.007 [2024-07-12 19:26:28.003034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.003064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.003506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.003538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.003985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.004016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.004478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.004509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.004923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.004953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.005294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.005326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.005759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.005790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.006202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.006234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.006663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.006693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.007114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.007172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 
00:30:22.007 [2024-07-12 19:26:28.007605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.007634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.007959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.007993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.007 [2024-07-12 19:26:28.008361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.008393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.008852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.008881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.009311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.009344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.009793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.009822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.010273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.010303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.010808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.010837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.011269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.011300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.011635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.011669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 
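The "EAL: No free 2048 kB hugepages reported on node 1" line mixed into the entries above is DPDK's hugepage probe during nvmf target startup; it means the per-node free-hugepage counter for NUMA node 1 read zero at that moment. A minimal standalone C sketch of the counter EAL is reporting on (illustrative only, not DPDK/EAL source; the sysfs path is the standard Linux layout and is assumed to exist on the test host):

/*
 * Illustrative only -- not DPDK/EAL code. The "No free 2048 kB hugepages
 * reported on node 1" message corresponds to this per-NUMA-node counter
 * being 0. Path assumed: standard Linux sysfs layout on a NUMA host with
 * 2 MB hugepages configured.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);   /* node or hugepage size not present on this host */
        return 1;
    }

    unsigned long free_pages = 0;
    if (fscanf(f, "%lu", &free_pages) == 1)
        printf("node 1 free 2048 kB hugepages: %lu\n", free_pages);
    fclose(f);
    return 0;
}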
00:30:22.007 [2024-07-12 19:26:28.012102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.012144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.012598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.012630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.013072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.013103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.013561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.013592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.013957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.013987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.014414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.014444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.014887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.014917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.015229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.015261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.015721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.015753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.016198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.016229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 
00:30:22.007 [2024-07-12 19:26:28.016632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.016663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.017057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.017087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.017485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.017516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.020019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.020095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.020470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.020512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.020966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.020999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.021380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.021411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.021841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.021871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.022317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.022349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 00:30:22.007 [2024-07-12 19:26:28.022774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.007 [2024-07-12 19:26:28.022805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.007 qpair failed and we were unable to recover it. 
00:30:22.008 [2024-07-12 19:26:28.023118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.023173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.023652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.023683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.024115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.024154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.024621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.024650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.025016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.025046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.025388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.025420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.025861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.025891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.026333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.026366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.026790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.026821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.027256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.027288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 
00:30:22.008 [2024-07-12 19:26:28.027743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.027775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.028198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.028229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.028688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.028718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.029139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.029170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.029609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.029640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.030080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.030112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.030600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.030633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.030910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.030940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.031351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.031383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.031822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.031852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 
00:30:22.008 [2024-07-12 19:26:28.032400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.032505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.033055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.033093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.033548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.033583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.033956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.033986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.034399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.034432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.034917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.034949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.035382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.035415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.035854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.035886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.036324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.036358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.036729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.036762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 
00:30:22.008 [2024-07-12 19:26:28.037162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.037195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.037513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.037544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.038001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.038031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.038527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.038558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.039000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.039029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.039333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.039367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.039790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.039820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.040210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.040242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.040689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.040722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 00:30:22.008 [2024-07-12 19:26:28.041165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.041196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.008 qpair failed and we were unable to recover it. 
00:30:22.008 [2024-07-12 19:26:28.041630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.008 [2024-07-12 19:26:28.041669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.042111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.042154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.042581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.042612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.042938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.042979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.043433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.043466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.043911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.043942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.044387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.044418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.044853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.044884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.045323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.045354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.045787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.045818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 
00:30:22.009 [2024-07-12 19:26:28.046256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.046288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.046698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.046729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.047173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.047205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.047414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.047445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.047889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.047919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.048369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.048400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.048709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.048746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.049142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.049175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.049617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.049647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.050072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.050102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 
00:30:22.009 [2024-07-12 19:26:28.050537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.050569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.051010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.051040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.051474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.051506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.053408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.053477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.053951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.053988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.055803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.055868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.056344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.056383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.056859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.056891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.057329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.057361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.057808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.057838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 
00:30:22.009 [2024-07-12 19:26:28.058283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.058315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.058629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.058663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.060573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.060634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.061107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.061155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.061604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.061638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.009 [2024-07-12 19:26:28.062074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.009 [2024-07-12 19:26:28.062106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.009 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.062552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.062585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.063008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.063039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.063462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.063494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 
00:30:22.010 [2024-07-12 19:26:28.063830] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:22.010 [2024-07-12 19:26:28.063937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.063965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.065855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.065918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.066390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.066427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.066883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.066927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.067400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.067450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.067916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.067972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.068432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.068493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.068921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.068974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.069423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.069474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 
00:30:22.010 [2024-07-12 19:26:28.069907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.069938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.070370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.070403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.070852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.070884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.071326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.071360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.071797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.071828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.072197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.072242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.072602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.072632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.073070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.073100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.073516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.073548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.073985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.074015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 
00:30:22.010 [2024-07-12 19:26:28.074445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.074476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.074797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.074832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.075249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.075280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.075723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.075753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.076207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.076237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.076707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.076736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.077159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.077190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.077664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.077694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.078110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.078157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.078630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.078661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 
00:30:22.010 [2024-07-12 19:26:28.079077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.079109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.079627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.079660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.080087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.080118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.080595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.080627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.081069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.081101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.081630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.081665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.081970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.082001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.082443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.010 [2024-07-12 19:26:28.082474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.010 qpair failed and we were unable to recover it. 00:30:22.010 [2024-07-12 19:26:28.082916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.082946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.083386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.083490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 
00:30:22.011 [2024-07-12 19:26:28.084010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.084049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.084498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.084533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.084980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.085011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.085459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.085491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.085808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.085839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.086291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.086325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.086757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.086788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.087228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.087259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.087709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.087739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.088058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.088095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 
00:30:22.011 [2024-07-12 19:26:28.088563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.088595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.089064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.089095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.089521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.089553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.089941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.089972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.090432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.090463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.090870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.090900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.091340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.091372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.091729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.091759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.092204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.092234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.092577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.092607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 
00:30:22.011 [2024-07-12 19:26:28.093054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.093084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.093534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.093564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.093993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.094023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.094531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.094563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.095003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.095033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.095435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.095466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.095881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.095910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.096290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.096320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.096752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.096783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.097093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.097135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 
00:30:22.011 [2024-07-12 19:26:28.097580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.097609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.097929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.097959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.098387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.098419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.098841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.098873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.099350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.099381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.099739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.099770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.100209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.100239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.100698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.100728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.011 [2024-07-12 19:26:28.101094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.011 [2024-07-12 19:26:28.101134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.011 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.101631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.101663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 
00:30:22.012 [2024-07-12 19:26:28.102099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.102140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.102563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.102594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.103032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.103069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.103550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.103582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.103900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.103931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.104377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.104482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.104977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.105014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.105470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.105503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.105934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.105966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.106385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.106416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 
00:30:22.012 [2024-07-12 19:26:28.106860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.106890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.107218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.107250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.107700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.107731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.108048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.108087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.108558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.108591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.108956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.108986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.109336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.109369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.109816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.109846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.110330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.110361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.110749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.110779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 
00:30:22.012 [2024-07-12 19:26:28.111196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.111227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.111622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.111652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.112081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.112111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.112549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.112578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.113048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.113077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.113435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.113467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.113786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.113814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.114270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.114301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.114732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.114762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.012 [2024-07-12 19:26:28.115241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.115272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 
00:30:22.012 [2024-07-12 19:26:28.115598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.012 [2024-07-12 19:26:28.115635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.012 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.115987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.116019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.116334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.116367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.116792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.116824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.117241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.117272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.117724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.117753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.118299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.118331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.118783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.118812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.119270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.119301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.119641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.119671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-07-12 19:26:28.120087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-07-12 19:26:28.120119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-07-12 19:26:28.120519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.120549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.120970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.121007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.121437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.121468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.121754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.121783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.122235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.122265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.122724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.122754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.123198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.123228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.123691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.123721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.124142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.124175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-07-12 19:26:28.124632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.124662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.125054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.125084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.125554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.125584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.125943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.125974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.126324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.126354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.126808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.126838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.127279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.127311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.127739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.127769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.128208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.128238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.128689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.128719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-07-12 19:26:28.129159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.129192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.129622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.129653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.130062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.130092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.130506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.130536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.130972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.131001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.131451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.131483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.131930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.131960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.132373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.132405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.132838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.132867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.133280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.133313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-07-12 19:26:28.133570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.133598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.134055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.134085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-07-12 19:26:28.134607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-07-12 19:26:28.134637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.135083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.135113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.135583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.135614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.136048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.136077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.136513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.136544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.136998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.137028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.137330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.137364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.137829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.137862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-07-12 19:26:28.138298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.138329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.138761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.138792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.139287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.139325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.139757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.139786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.140221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.140252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.140732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.140762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.141203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.141234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.141660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.141689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.142138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.142170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.142626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.142656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-07-12 19:26:28.143022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.143051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.143455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.143486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.143984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.144013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.144302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.144332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.144763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.144792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.145070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.145098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.145532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.145564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.145980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.146010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.146416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.146448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.146902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.146931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-07-12 19:26:28.147291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.147322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.147758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.147788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.148220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.148250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.148705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-07-12 19:26:28.148736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-07-12 19:26:28.149177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.149208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.149650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.149680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.150120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.150161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.150654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.150684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.151188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.151218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.151652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.151684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-07-12 19:26:28.152117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.152163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.152604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.152634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.153046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.153077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.153536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.153569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.154003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.154033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.154445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.154476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.154920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.154950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.155395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.155427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.155883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.155913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-07-12 19:26:28.156444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-07-12 19:26:28.156550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-07-12 19:26:28.156925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.156962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.157395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.157428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.157873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.157924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.158350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.158381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.158802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.158832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.159175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.159215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.159677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.159707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.160086] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:22.289 [2024-07-12 19:26:28.160146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:22.289 [2024-07-12 19:26:28.160155] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:22.289 [2024-07-12 19:26:28.160162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:22.289 [2024-07-12 19:26:28.160168] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:22.289 [2024-07-12 19:26:28.160168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.160197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
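The app_setup_trace NOTICE lines above describe how the target's trace buffer can be captured. A short sketch based on those messages (the spdk_trace binary path is assumed to be the default SPDK build location; the -s/-i arguments are the ones printed in the log itself):
    # Snapshot the nvmf target's tracepoints while it is running, as the NOTICE suggests.
    ./build/bin/spdk_trace -s nvmf -i 0 > /tmp/nvmf_trace_snapshot.txt
    # Or keep the shared-memory trace file for offline analysis/debug.
    cp /dev/shm/nvmf_trace.0 /tmp/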
00:30:22.289 [2024-07-12 19:26:28.160356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:22.289 [2024-07-12 19:26:28.160532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:22.289 [2024-07-12 19:26:28.160666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.160695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 [2024-07-12 19:26:28.160698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.160699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:22.289 [2024-07-12 19:26:28.161109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.161150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.161585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.161615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.162058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.162087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.289 qpair failed and we were unable to recover it.
00:30:22.289 [2024-07-12 19:26:28.162410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.289 [2024-07-12 19:26:28.162449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.290 qpair failed and we were unable to recover it.
00:30:22.290 [2024-07-12 19:26:28.162742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.290 [2024-07-12 19:26:28.162770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.290 qpair failed and we were unable to recover it.
00:30:22.290 [2024-07-12 19:26:28.163217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.290 [2024-07-12 19:26:28.163251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.290 qpair failed and we were unable to recover it.
00:30:22.290 [2024-07-12 19:26:28.163632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.290 [2024-07-12 19:26:28.163663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.290 qpair failed and we were unable to recover it.
00:30:22.290 [2024-07-12 19:26:28.164069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.290 [2024-07-12 19:26:28.164099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.290 qpair failed and we were unable to recover it.
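The four reactor_run NOTICE lines above show one SPDK reactor (a per-core event loop) starting on each of cores 4, 5, 6 and 7, which corresponds to a four-core CPU mask. A hedged sketch of how such a mask is normally passed on the application command line (the binary path and the mask value are assumptions for illustration, not taken from this log):
    # 0xF0 selects cores 4, 5, 6 and 7; the app framework starts one reactor per selected core.
    ./build/bin/nvmf_tgt -m 0xF0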
00:30:22.290 [2024-07-12 19:26:28.164522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.164553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.164989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.165019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.165453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.165484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.165931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.165961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.166223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.166251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.166697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.166727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.167058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.167087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.167448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.167477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.167850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.167879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.168340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.168374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-07-12 19:26:28.168730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.168760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.169201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.169233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.169681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.169711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.170149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.170180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.170623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.170652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.171114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.171155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.171651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.171681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.172147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.172179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.172686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.172716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.173333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.173440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-07-12 19:26:28.173967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.174005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.174336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.174370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.174844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.174875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.175273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.175305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.175655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.175686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.176019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-07-12 19:26:28.176050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-07-12 19:26:28.176526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.176557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.177004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.177036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.177479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.177511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.177838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.177879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-07-12 19:26:28.178270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.178307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.178728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.178758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.179085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.179115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.179469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.179499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.179932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.179962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.180383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.180423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.180863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.180892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.181229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.181258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.181659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.181688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.182046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.182076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-07-12 19:26:28.182503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.182535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.182949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.182979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.183472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.183503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.183944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.183973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.184460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.184492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.184912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.184941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.185380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.185411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.185745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.185775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.186102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.186142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.186431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.186461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-07-12 19:26:28.186735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.186763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.187275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.187305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.187741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.187770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.188042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.188070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.188520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.188552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.188991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.189023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.189433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.189464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.189907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.189937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.190215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.190245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-07-12 19:26:28.190690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-07-12 19:26:28.190719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 
00:30:22.292 [2024-07-12 19:26:28.191169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.191199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.191646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.191675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.191998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.192028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.192288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.192319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.192744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.192775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.193212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.193243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.193703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.193732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.194161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.194191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.194531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.194563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.195003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.195032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 
00:30:22.292 [2024-07-12 19:26:28.195397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.195428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.195861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.195892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.196212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.196242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.196684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.196714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.197163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.197194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.197650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.197685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.198103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.198142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.198408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.198435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.198829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.198858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 00:30:22.292 [2024-07-12 19:26:28.199289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.292 [2024-07-12 19:26:28.199319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.292 qpair failed and we were unable to recover it. 
00:30:22.292 [2024-07-12 19:26:28.199636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-07-12 19:26:28.199670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-07-12 19:26:28.200137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-07-12 19:26:28.200168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-07-12 19:26:28.200651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-07-12 19:26:28.200682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 - 00:30:22.299 [... the same three-line failure sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every further connection attempt, with timestamps running from 2024-07-12 19:26:28.201169 through 19:26:28.294251 ...]
00:30:22.299 [2024-07-12 19:26:28.294491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.294519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.299 [2024-07-12 19:26:28.294851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.294881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.299 [2024-07-12 19:26:28.295322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.295353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.299 [2024-07-12 19:26:28.295802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.295832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.299 [2024-07-12 19:26:28.296075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.296104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.299 [2024-07-12 19:26:28.296545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.296575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.299 [2024-07-12 19:26:28.297027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.299 [2024-07-12 19:26:28.297058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.299 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.297507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.297537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.297792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.297821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.298332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.298363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 
00:30:22.300 [2024-07-12 19:26:28.298812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.298842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.299277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.299307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.299575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.299609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.299856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.299886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.300318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.300348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.300575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.300603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.301046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.301076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.301308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.301344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.301726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.301756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.302193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.302224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 
00:30:22.300 [2024-07-12 19:26:28.302543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.302573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.302821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.302849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.303301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.303331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.303766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.303796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.304249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.304279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.304729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.304757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.305013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.305042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.305458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.305489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.305933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.305962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.306294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.306325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 
00:30:22.300 [2024-07-12 19:26:28.306624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.306653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.307109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.307147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.307615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.307644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.308100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.308157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.308616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.308646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.300 [2024-07-12 19:26:28.309093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.300 [2024-07-12 19:26:28.309136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.300 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.309436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.309470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.309754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.309783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.310043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.310072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.310436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.310467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 
00:30:22.301 [2024-07-12 19:26:28.310784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.310814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.311254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.311287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.311753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.311784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.312232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.312264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.312488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.312518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.312963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.312993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.313449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.313478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.313924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.313954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.314407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.314437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.314876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.314905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 
00:30:22.301 [2024-07-12 19:26:28.315339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.315369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.315817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.315848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.316298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.316333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.316760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.316790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.317236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.317267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.317729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.317759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.318194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.318224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.318675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.318705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.319162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.319195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.319648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.319678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 
00:30:22.301 [2024-07-12 19:26:28.320137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.320168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.320423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.320451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.320875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.320905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.321159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.321189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.321654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.321683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.301 qpair failed and we were unable to recover it. 00:30:22.301 [2024-07-12 19:26:28.322134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.301 [2024-07-12 19:26:28.322165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.322476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.322507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.322959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.322989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.323231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.323260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.323700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.323730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 
00:30:22.302 [2024-07-12 19:26:28.324191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.324223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.324671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.324700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.325082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.325111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.325542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.325572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.325807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.325835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.326086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.326115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.326569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.326599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.326841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.326869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.327101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.327141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.327603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.327633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 
00:30:22.302 [2024-07-12 19:26:28.328069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.328099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.328516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.328548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.328991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.329021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.329259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.329289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.329675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.329705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.330164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.330194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.330643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.330672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.331116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.331155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.331649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.331679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.331931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.331959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 
00:30:22.302 [2024-07-12 19:26:28.332389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.332421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.332869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.332899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.333338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.333380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.333840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.333869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.334200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.302 [2024-07-12 19:26:28.334237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.302 qpair failed and we were unable to recover it. 00:30:22.302 [2024-07-12 19:26:28.334679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.334708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.335147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.335178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.335640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.335669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.336040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.336070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.336548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.336578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 
00:30:22.303 [2024-07-12 19:26:28.337039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.337069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.337492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.337524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.337975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.338006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.338259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.338290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.338714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.338744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.339197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.339230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.339708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.339738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.340019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.340048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.340472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.340502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.340952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.340981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 
00:30:22.303 [2024-07-12 19:26:28.341306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.341340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.341582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.341610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.341930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.341959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.342422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.342453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.342898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.342927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.343380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.343411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.343924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.343953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.344255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.344289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.344751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.344781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.345211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.345243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 
00:30:22.303 [2024-07-12 19:26:28.345454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.345482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.303 qpair failed and we were unable to recover it. 00:30:22.303 [2024-07-12 19:26:28.345931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.303 [2024-07-12 19:26:28.345962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.346378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.346408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.346860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.346890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.347375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.347405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.347834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.347863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.348319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.348350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.348796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.348826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.349155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.349185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 00:30:22.304 [2024-07-12 19:26:28.349652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.304 [2024-07-12 19:26:28.349682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.304 qpair failed and we were unable to recover it. 
00:30:22.304 [2024-07-12 19:26:28.350135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.304 [2024-07-12 19:26:28.350166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.304 qpair failed and we were unable to recover it.
00:30:22.304 [... the same connect() failure (errno = 111, ECONNREFUSED) and qpair recovery error for tqpair=0x7f9bfc000b90 (addr=10.0.0.2, port=4420) repeat continuously from 19:26:28.350 through 19:26:28.440; duplicate log entries omitted ...]
00:30:22.587 [2024-07-12 19:26:28.440377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.440410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.440872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.440902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.441386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.441416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.441655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.441684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.442064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.442093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.442523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.442554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.443075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.443105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.443364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.443394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.443845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.443875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.444255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.444286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-07-12 19:26:28.444697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.444726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.445159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.445189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.445669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.445697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.446182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.446211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.446711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.446741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.446998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.447027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.447458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.447490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.447961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.447992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.448411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.448442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.448760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.448791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-07-12 19:26:28.449205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.449236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.449482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.449510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.449946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.449974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.450393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.450425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.450748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.450785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.451203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.451233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.451668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.451697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.452146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.452177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.452644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.452675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.453142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.453175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-07-12 19:26:28.453613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.453642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.454080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.454109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.454356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.454385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.454841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.454872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.455356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.455388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.455824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.455854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.456270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-07-12 19:26:28.456299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-07-12 19:26:28.456744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.456773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.457081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.457114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.457582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.457612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 
00:30:22.588 [2024-07-12 19:26:28.458070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.458099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.458550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.458580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.458795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.458823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.459234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.459265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.459745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.459773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.460015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.460044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.460280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.460311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.460735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.460764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.461214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.461245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.461708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.461739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 
00:30:22.588 [2024-07-12 19:26:28.461978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.462008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.462332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.462364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.462824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.462853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.463184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.463215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.463654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.463684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.464208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.464238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.464694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.464723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.465022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.465055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.465466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.465497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.465930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.465966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 
00:30:22.588 [2024-07-12 19:26:28.466421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.466450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.466906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.466935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.467388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.467420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.467858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.467888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.468150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.468181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.468712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.468742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.469175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.469206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.469654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.469683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.469941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.469969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.470421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.470452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 
00:30:22.588 [2024-07-12 19:26:28.470886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.470915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.471241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.471276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.471744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.471774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.472230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.472261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.472718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.472746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.473185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.473215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.473663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.473693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.588 qpair failed and we were unable to recover it. 00:30:22.588 [2024-07-12 19:26:28.473823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.588 [2024-07-12 19:26:28.473851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.474217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.474247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.474681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.474710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 
00:30:22.589 [2024-07-12 19:26:28.475133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.475164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.475646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.475676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.476150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.476181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.476603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.476631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.477083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.477112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.477542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.477573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.478004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.478034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.478475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.478505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.478990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.479021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.479465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.479495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 
00:30:22.589 [2024-07-12 19:26:28.479923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.479954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.480213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.480245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.480687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.480716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.480998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.481029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.481463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.481494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.481926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.481956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.482408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.482439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.482888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.482917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.483167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.483196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.483517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.483557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 
00:30:22.589 [2024-07-12 19:26:28.484023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.484052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.484519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.484550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.484983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.485013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.485470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.485500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.485920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.485950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.486400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.486432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.486864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.486893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.487103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.487141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.487602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.487631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.589 [2024-07-12 19:26:28.488078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.488107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 
00:30:22.589 [2024-07-12 19:26:28.488351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.589 [2024-07-12 19:26:28.488380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.589 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.488817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.488846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.489297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.489328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.489779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.489809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.490319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.490349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.490664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.490694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.491056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.491086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.491328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.491357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.491729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.491759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.492188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.492218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 
00:30:22.590 [2024-07-12 19:26:28.492674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.492704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.492950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.492978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.493406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.493436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.493758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.493791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.494212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.494245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.494709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.494740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.495068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.495102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.495534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.495564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.495805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.495833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-07-12 19:26:28.496251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-07-12 19:26:28.496280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 
00:30:22.590 [2024-07-12 19:26:28.496547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.590 [2024-07-12 19:26:28.496576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.590 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats approximately 200 more times between 19:26:28.496 and 19:26:28.583 ...]
00:30:22.595 [2024-07-12 19:26:28.583593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.595 [2024-07-12 19:26:28.583630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420
00:30:22.595 qpair failed and we were unable to recover it.
00:30:22.595 [2024-07-12 19:26:28.584082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.584111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.595 [2024-07-12 19:26:28.584562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.584591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.595 [2024-07-12 19:26:28.584920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.584954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.595 [2024-07-12 19:26:28.585384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.585415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.595 [2024-07-12 19:26:28.585874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.585904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.595 [2024-07-12 19:26:28.586367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.586400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.595 [2024-07-12 19:26:28.586822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.595 [2024-07-12 19:26:28.586853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.595 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.587293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.587323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.587735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.587765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.588232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.588263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-07-12 19:26:28.588714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.588743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.589031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.589064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.589523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.589556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.589819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.589848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.590206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.590237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.590672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.590702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.591147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.591178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.591501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.591536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.591963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.591992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.592453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.592487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-07-12 19:26:28.592933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.592964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.593222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.593251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.593703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.593734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.594014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.594043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.594460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.594491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.594905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.594934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.595372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.595403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.595837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.595869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.596317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.596348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.596789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.596818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-07-12 19:26:28.597208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.597239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.597692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.597721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.598145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.598177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.598634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.598662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.598944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.598974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.599459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.599491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.599821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.599854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.600280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.600310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.600747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.600776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.600890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.600924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-07-12 19:26:28.601327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.601359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.601819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.601848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.602232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.602263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.602565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.602595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.603044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.603073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.603521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.603551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.603883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-07-12 19:26:28.603913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-07-12 19:26:28.604348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.604379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.604624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.604653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.605112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.605150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-07-12 19:26:28.605599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.605628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.606074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.606104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.606549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.606581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.606911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.606941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.607376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.607407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.607770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.607800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.608237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.608269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.608738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.608767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.609050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.609082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.609518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.609548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-07-12 19:26:28.610004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.610034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.610456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.610486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.610920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.610949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.611379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.611410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.611761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.611792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.612241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.612272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.612719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.612751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.613237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.613268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.613717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.613747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.614000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.614032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-07-12 19:26:28.614464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.614493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.614933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.614962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.615282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.615314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.615563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.615593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.616053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.616082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.616455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.616484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.616885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.616915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.617352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.617383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.617633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.617661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.617984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.618021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-07-12 19:26:28.618438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.618471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.618918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.618948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.619384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.619415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.619848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-07-12 19:26:28.619878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-07-12 19:26:28.620314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.620344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.620780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.620811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.621259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.621290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.621735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.621764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.622178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.622208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.622593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.622623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 
00:30:22.598 [2024-07-12 19:26:28.623051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.623082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.623464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.623496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.623961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.623990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.624406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.624436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.624877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.624907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.625345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.625375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.625821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.625850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.626274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.626305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.626740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.626770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.627217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.627249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 
00:30:22.598 [2024-07-12 19:26:28.627576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.627606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.628047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.628077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.628533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.628565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.628996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.629025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.629446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.629478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.629930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.629958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.630280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.630313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.630742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.630772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.631138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.631169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.631593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.631622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 
00:30:22.598 [2024-07-12 19:26:28.631929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.631963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.632222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.632253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.632683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.632712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.633136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.633167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.633602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.633631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.634071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.634101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.634566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.634595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.635031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.635061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.635377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.635408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-07-12 19:26:28.635842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-07-12 19:26:28.635879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 
00:30:22.598 [2024-07-12 19:26:28.636325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.636355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.636625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.636653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.637080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.637111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.637553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.637583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.637870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.637898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.638352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.638383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.638821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.638851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.639231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.639261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.639701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.639730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.640184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.640215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 
00:30:22.599 [2024-07-12 19:26:28.640696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.640726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.641239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.641270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.641716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.641746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.642205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.642237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.642687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.642716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.642999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.643030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.643445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.643476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.643931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.643960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.644141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.644172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.644610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.644640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 
00:30:22.599 [2024-07-12 19:26:28.645090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.645119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.645572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.645601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.646038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.646068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.646463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.646495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.646910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.646941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.647390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.647421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.647850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.647880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.648324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.648354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.648595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.648623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.648873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.648902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 
00:30:22.599 [2024-07-12 19:26:28.649225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.649259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.649500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.649531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.649993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.650022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.650432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.650463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.650699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.650729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.651134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.651166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.651627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.651657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.652107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.652149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.652617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.652647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.599 [2024-07-12 19:26:28.652888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.652925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 
00:30:22.599 [2024-07-12 19:26:28.653382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.599 [2024-07-12 19:26:28.653415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.599 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.653743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.653773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.654208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.654238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.654683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.654712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.654954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.654982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.655392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.655422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.655854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.655883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.656128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.656159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.656433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.656463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.656915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.656944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 
00:30:22.600 [2024-07-12 19:26:28.657384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.657415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.657849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.657878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.658334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.658366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.658856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.658887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.659136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.659167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.659608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.659638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.659883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.659912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.660488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.660592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.661095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.661160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.661590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.661621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 
00:30:22.600 [2024-07-12 19:26:28.662066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.662097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.662433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.662469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.662704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.662733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.663155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.663187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.663430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.663459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.663881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.663911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.664357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.664390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.664507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.664534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.664937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.664966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.665278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.665308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 
00:30:22.600 [2024-07-12 19:26:28.665750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.665780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.666024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.666052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.666301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.666332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.666766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.666795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.667271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.667303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.667551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.667581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.667836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.667865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.668286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.668319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.668570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.668599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 00:30:22.600 [2024-07-12 19:26:28.669026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.669057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.600 qpair failed and we were unable to recover it. 
00:30:22.600 [2024-07-12 19:26:28.669521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.600 [2024-07-12 19:26:28.669554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.669876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.669910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.670279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.670310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.670705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.670735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.671169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.671200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.671650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.671681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.671809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.671836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.672264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.672295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.672742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.672772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.673189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.673221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-07-12 19:26:28.673663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.673694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.674134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.674165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.674634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.674663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.675082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.675112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.675375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.675406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.675732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.675763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.676256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.676288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.676741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.676771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.677212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.677242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.677691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.677721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-07-12 19:26:28.678006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.678037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.678460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.678490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.678970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.678999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.679462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.679493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.679733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.679762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.680173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.680206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.680660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.680696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.681144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.681175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.681613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.681643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.681835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.681865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-07-12 19:26:28.682281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.682311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.682748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.682778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.683223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.683253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.683498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.683526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.683951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.683982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.684259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.684289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.684730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.684761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.685190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.685221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.685468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.685496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-07-12 19:26:28.685937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.685968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-07-12 19:26:28.686208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-07-12 19:26:28.686239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.686480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.686508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.686968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.686998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.687401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.687435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.687690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.687719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.688160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.688190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.688481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.688509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.688801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.688830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.689243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.689273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.689726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.689759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-07-12 19:26:28.690191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.690222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.690666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.690696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.691154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.691184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.691607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.691637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.692154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.692187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.692616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.692647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.693105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.693144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.693548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.693580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.693876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.693904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.694326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.694357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-07-12 19:26:28.694778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.694807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.695070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.695098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.695357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.695388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.695818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.695848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.696311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.696344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.696775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.696806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.697055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.697092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.697608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.697640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.698092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.698131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-07-12 19:26:28.698578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-07-12 19:26:28.698608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-07-12 19:26:28.699032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.699062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.699495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.699526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.699946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.699975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.700310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.700350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.700814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.700844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.701105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.701153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.701598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.701629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.701962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.701992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.702434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.702465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.702582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.702611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 
00:30:22.868 [2024-07-12 19:26:28.703035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.703067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.703514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.703544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.703868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.703896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.704147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.704178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.704505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.704534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.704980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.705011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.705502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.705533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.705966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.705996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.706428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.706460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.706909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.706939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 
00:30:22.868 [2024-07-12 19:26:28.707365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.707400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.707794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.707824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.708147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.708181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.708431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.708462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.708968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.708999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.709328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.709360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.709811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.709841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.868 [2024-07-12 19:26:28.710258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.868 [2024-07-12 19:26:28.710288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.868 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.710611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.710638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.711071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.711101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 
00:30:22.869 [2024-07-12 19:26:28.711516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.711547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.711664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.711691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.712000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.712031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.712474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.712506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.712915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.712947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.713230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.713261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.713701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.713737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.714053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.714084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.714540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.714572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.715021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.715053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 
00:30:22.869 [2024-07-12 19:26:28.715308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.715339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.715821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.715851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.716142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.716174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.716483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.716513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.716643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.716671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.717091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.717120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.717584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.717613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.718065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.718095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.718418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.718454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.718687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.718717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 
00:30:22.869 [2024-07-12 19:26:28.719172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.719205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.719650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.719680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.720096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.720134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.720396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.720425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.720878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.720908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.721236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.721271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.721724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.721755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.722208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.722239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.722687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.722717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.723006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.723035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 
00:30:22.869 [2024-07-12 19:26:28.723294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.723325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.869 qpair failed and we were unable to recover it. 00:30:22.869 [2024-07-12 19:26:28.723761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.869 [2024-07-12 19:26:28.723791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.724206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.724237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.724470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.724502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.724932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.724962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.725469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.725500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.725950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.725981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.726418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.726448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.726729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.726760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.727189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.727219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 
00:30:22.870 [2024-07-12 19:26:28.727665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.727695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.727926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.727954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.728249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.728282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.728725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.728757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.729168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.729199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.729482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.729510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.729939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.729977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.730388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.730419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.730825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.730854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.731305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.731336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 
00:30:22.870 [2024-07-12 19:26:28.731771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.731801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.731915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.731942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.732241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.732276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.732668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.732700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.733131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.733163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.733617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.733647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.733892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.733920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.734241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.734276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.734698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.734728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.735181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.735212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 
00:30:22.870 [2024-07-12 19:26:28.735510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.735539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.735845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.735874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.736021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.736052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.736464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.736493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.736929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.736959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.737409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.870 [2024-07-12 19:26:28.737439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.870 qpair failed and we were unable to recover it. 00:30:22.870 [2024-07-12 19:26:28.737818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.737849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.738285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.738316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.738796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.738825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.739279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.739310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 
00:30:22.871 [2024-07-12 19:26:28.739730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.739760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.740240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.740271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.740518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.740548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.740881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.740913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.741320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.741351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.741792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.741822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.742261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.742290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.742544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.742572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.743061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.743091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.743538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.743568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 
00:30:22.871 [2024-07-12 19:26:28.744008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.744037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.744409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.744439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.744890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.744920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.745176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.745205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.745633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.745663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.746131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.746163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.746643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.746678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.747162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.747191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.747635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.747664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.748117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.748155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 
00:30:22.871 [2024-07-12 19:26:28.748635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.748664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.749161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.749191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.749635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.749665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.750113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.750161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.750593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.750624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.751076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.751106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.751572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.751603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.752047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.752077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.752533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.752563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.752803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.752831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 
00:30:22.871 [2024-07-12 19:26:28.753267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.753298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.753717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.753747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.754074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.754104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.871 [2024-07-12 19:26:28.754401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.871 [2024-07-12 19:26:28.754431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.871 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.754672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.754699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.755167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.755197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.755533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.755566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.756019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.756049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.756390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.756422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.756891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.756920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 
00:30:22.872 [2024-07-12 19:26:28.757367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.757397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.757862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.757892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.758332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.758363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.758797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.758827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.759273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.759303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.759734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.759764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.760201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.760231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.760677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.760706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.760948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.760976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.761424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.761457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 
00:30:22.872 [2024-07-12 19:26:28.761874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.761904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.762356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.762385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.762849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.762880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.763328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.763358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.763649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.763677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.764166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.764196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.764653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.764688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.765074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.765103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.765480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.765510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.765963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.765991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 
00:30:22.872 [2024-07-12 19:26:28.766437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.766467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.766920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.766950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.767380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.767410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.767822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.767852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.768107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.768143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.768624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.768653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.768984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.769018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.769456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.872 [2024-07-12 19:26:28.769486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.872 qpair failed and we were unable to recover it. 00:30:22.872 [2024-07-12 19:26:28.769900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.769929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.770422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.770452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 
00:30:22.873 [2024-07-12 19:26:28.770822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.770853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.771302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.771332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.771520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.771553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.771994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.772025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.772466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.772496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.772758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.772786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:22.873 [2024-07-12 19:26:28.773199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.773233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:30:22.873 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:22.873 [2024-07-12 19:26:28.773714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.773746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 
00:30:22.873 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:22.873 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.873 [2024-07-12 19:26:28.774183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.774217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.774684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.774716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.774998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.775028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.775488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.775520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.775841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.775872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.776279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.776308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.776722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.776751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.777204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.777236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.777725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.777756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 
00:30:22.873 [2024-07-12 19:26:28.778202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.778232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.778645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.778674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.779129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.779161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.779629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.779658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.780069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.780100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.780442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.873 [2024-07-12 19:26:28.780471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.873 qpair failed and we were unable to recover it. 00:30:22.873 [2024-07-12 19:26:28.780920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.780950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.781197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.781233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.781650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.781682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.782138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.782169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 
00:30:22.874 [2024-07-12 19:26:28.782601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.782632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.783074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.783104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.783437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.783469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.783925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.783956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.784477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.784580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.785078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.785119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.785390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.785420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.785658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.785694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.786154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.786189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.786714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.786745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 
00:30:22.874 [2024-07-12 19:26:28.787189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.787220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.787689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.787720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.788163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.788197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.788661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.788692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.789143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.789177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.789439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.789468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.789953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.789984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.790403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.790433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.790883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.790915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.791339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.791371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 
00:30:22.874 [2024-07-12 19:26:28.791811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.791843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.792281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.792312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.792769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.792799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.793256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.793290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.793743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.793775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.794197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.794229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.794622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.794653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.795098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.795139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.795582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.795611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.874 [2024-07-12 19:26:28.796039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.796070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 
00:30:22.874 [2024-07-12 19:26:28.796527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.874 [2024-07-12 19:26:28.796558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.874 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.797005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.797036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.797321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.797352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.797605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.797635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.797944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.797977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.798423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.798454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.798896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.798926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.799324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.799361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.799682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.799713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.799993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.800022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 
00:30:22.875 [2024-07-12 19:26:28.800441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.800474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.800795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.800824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.801277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.801308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.801749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.801782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.802233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.802265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.802698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.802728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.802995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.803023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.803442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.803472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.803925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.803955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.804382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.804413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 
00:30:22.875 [2024-07-12 19:26:28.804862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.804894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.805340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.805372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.805524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.805557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.805864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.805894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.806342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.806373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.806814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.806844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.807211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.807243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.807586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.807615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.808029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.808063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.808493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.808525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 
00:30:22.875 [2024-07-12 19:26:28.808845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.808877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.809310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.809341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.809826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.809856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.810111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.810147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.810446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.810477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.810923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.875 [2024-07-12 19:26:28.810952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.875 qpair failed and we were unable to recover it. 00:30:22.875 [2024-07-12 19:26:28.811405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.811436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.811877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.811908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.812321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.812352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.812798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.812829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 
00:30:22.876 [2024-07-12 19:26:28.813301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.813335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:22.876 [2024-07-12 19:26:28.813787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.813819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:22.876 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.876 [2024-07-12 19:26:28.814243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.814275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.876 [2024-07-12 19:26:28.814747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.814778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.815229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.815259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.815731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.815760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.816179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.816210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.816636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.816666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 
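The rpc_cmd lines interleaved above install the nvmftestfini cleanup trap and then create the backing bdev for the test: bdev_malloc_create 64 512 -b Malloc0 asks the running SPDK target for a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. Outside the test harness the equivalent call would look roughly like the sketch below (assuming an SPDK target listening on the default RPC socket /var/tmp/spdk.sock):

  # Sketch of the RPC that rpc_cmd wraps here: 64 = size in MB,
  # 512 = block size in bytes, -b = bdev name.
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0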
00:30:22.876 [2024-07-12 19:26:28.816988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.817026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.817494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.817525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.817988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.818018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.818281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.818310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.818677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.818706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.818951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.818979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.819374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.819406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.819839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.819869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.820350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.820380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.820827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.820856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 
00:30:22.876 [2024-07-12 19:26:28.821151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.821181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.821509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.821540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.821991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.822019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.822448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.822479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.822789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.822825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.823274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.823304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.823746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.823776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.824234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.824265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.824735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.824764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 00:30:22.876 [2024-07-12 19:26:28.825209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.876 [2024-07-12 19:26:28.825240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.876 qpair failed and we were unable to recover it. 
00:30:22.876 [2024-07-12 19:26:28.825688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.825717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.826142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.826172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.826622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.826652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.826966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.827001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.827352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.827407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.827892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.827921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.828363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.828394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.828907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.828937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.829357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.829388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.829651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.829680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 
00:30:22.877 [2024-07-12 19:26:28.830141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.830172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.830621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.830651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.831103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.831161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.831681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.831711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.832197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.832250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.832541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.832570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.833034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.833063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.833513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.833544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.833995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.834024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.834474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.834505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 
00:30:22.877 [2024-07-12 19:26:28.834999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.835029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.835498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.835528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.835983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.836012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.836458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.836489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.836939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.836969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.837244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.837273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.837730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.837762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.838141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.838172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.838640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.838670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.838960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.838988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 
00:30:22.877 Malloc0 00:30:22.877 [2024-07-12 19:26:28.839320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.839355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 [2024-07-12 19:26:28.839792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.877 [2024-07-12 19:26:28.839823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.877 qpair failed and we were unable to recover it. 00:30:22.877 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.877 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:22.877 [2024-07-12 19:26:28.840285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.840319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.840554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.878 [2024-07-12 19:26:28.840584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.878 [2024-07-12 19:26:28.840841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.840872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.841314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.841346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.841791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.841822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.842267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.842298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 
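The lone "Malloc0" at the start of this chunk is the bdev name echoed back by the bdev_malloc_create call above; the next rpc_cmd then creates the TCP transport on the target (nvmf_create_transport -t tcp -o). A sketch of the same call, leaving out the test's extra -o flag since its meaning is not shown in this log:

  # Sketch, assuming a running SPDK target on the default RPC socket:
  # create the NVMe-oF TCP transport with default parameters.
  ./scripts/rpc.py nvmf_create_transport -t tcp

The target-side "*** TCP Transport Init ***" notice a little further down confirms the transport came up.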
00:30:22.878 [2024-07-12 19:26:28.842612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.842642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.842942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.842974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.843381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.843412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.843735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.843767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.844211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.844241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.844704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.844734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.844988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.845017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.845458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.845489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.845934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.845965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 
00:30:22.878 [2024-07-12 19:26:28.846363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.846361] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:22.878 [2024-07-12 19:26:28.846394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.846835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.846867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.847194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.847228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.847567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.847598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.847927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.847956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.848231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.848261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.848715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.848745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.849196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.849226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.849685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.849715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 
00:30:22.878 [2024-07-12 19:26:28.850164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.850195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.850523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.850555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.851001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.851030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.851413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.878 [2024-07-12 19:26:28.851443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.878 qpair failed and we were unable to recover it. 00:30:22.878 [2024-07-12 19:26:28.851918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.851947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.852297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.852327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.852792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.852822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.853260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.853291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.853604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.853640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.854105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.854144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 
00:30:22.879 [2024-07-12 19:26:28.854319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.854347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.854777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.854808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.855060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.855088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.879 [2024-07-12 19:26:28.855567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.855600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:22.879 [2024-07-12 19:26:28.855866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.855894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.879 [2024-07-12 19:26:28.856167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.856196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.879 [2024-07-12 19:26:28.856666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.856698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.857221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.857253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 
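Here the script creates the subsystem the initiator will eventually attach to: nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001, where -a allows any host NQN to connect and -s sets the serial number reported to initiators. Roughly the same call outside the harness (a sketch, assuming the default RPC socket):

  # Sketch: create the NVMe-oF subsystem; -a = allow any host,
  # -s = serial number.
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001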
00:30:22.879 [2024-07-12 19:26:28.857734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.857763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.858208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.858240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.858674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.858704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.859174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.859205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.859632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.859661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.860147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.860177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.860633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.860663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.861120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.861159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.861592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.861621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.862032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.862061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 
00:30:22.879 [2024-07-12 19:26:28.862513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.862544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.862991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.863020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.863456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.863487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.863756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.863787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.864237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.864269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.864655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.864687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.879 [2024-07-12 19:26:28.865138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.879 [2024-07-12 19:26:28.865168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.879 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.865623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.865653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.866096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.866134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.866584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.866614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 
00:30:22.880 [2024-07-12 19:26:28.867115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.867154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.880 [2024-07-12 19:26:28.867488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.867520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:22.880 [2024-07-12 19:26:28.867991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.868021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.880 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.880 [2024-07-12 19:26:28.868486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.868517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.868939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.868968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.869426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.869456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.869913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.869943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.870252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.870281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 
00:30:22.880 [2024-07-12 19:26:28.870728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.870758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.871201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.871231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.871535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.871566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.871986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.872018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.872377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.872412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.872849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.872880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.873244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.873275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.873529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.873558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.873791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.873820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.874244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.874275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 
00:30:22.880 [2024-07-12 19:26:28.874730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.874761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.875162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.875192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.875673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.875705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.876010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.876046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.876516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.876547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.876867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.876900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.877325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.877356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.877793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.877823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.878099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.878137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 [2024-07-12 19:26:28.878584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.878613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 
00:30:22.880 [2024-07-12 19:26:28.879081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.879110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.880 qpair failed and we were unable to recover it. 00:30:22.880 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.880 [2024-07-12 19:26:28.879609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.880 [2024-07-12 19:26:28.879639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.879881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.881 [2024-07-12 19:26:28.879910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.881 [2024-07-12 19:26:28.880238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.880269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.881 [2024-07-12 19:26:28.880719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.880749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.881241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.881272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.881729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.881760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.882213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.882244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-07-12 19:26:28.882661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.882691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.883135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.883165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.883629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.883659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.884132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.884163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.884624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.884654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.885089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.885119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.885447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.885483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.885869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.885898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.886436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-07-12 19:26:28.886544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9bfc000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-07-12 19:26:28.886730] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:22.881 [2024-07-12 19:26:28.897597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.881 [2024-07-12 19:26:28.897835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.881 [2024-07-12 19:26:28.897901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.881 [2024-07-12 19:26:28.897927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.881 [2024-07-12 19:26:28.897948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.881 [2024-07-12 19:26:28.898005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:22.881 19:26:28 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1619874 00:30:22.881 [2024-07-12 19:26:28.907459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.881 [2024-07-12 19:26:28.907625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.881 [2024-07-12 19:26:28.907666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.881 [2024-07-12 19:26:28.907683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.881 [2024-07-12 19:26:28.907697] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.881 [2024-07-12 19:26:28.907733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-07-12 19:26:28.917402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.881 [2024-07-12 19:26:28.917521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.881 [2024-07-12 19:26:28.917553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.881 [2024-07-12 19:26:28.917565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.881 [2024-07-12 19:26:28.917575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.881 [2024-07-12 19:26:28.917602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.927371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.881 [2024-07-12 19:26:28.927475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.881 [2024-07-12 19:26:28.927501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.881 [2024-07-12 19:26:28.927509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.881 [2024-07-12 19:26:28.927516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.881 [2024-07-12 19:26:28.927537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-07-12 19:26:28.937298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.881 [2024-07-12 19:26:28.937400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.881 [2024-07-12 19:26:28.937426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.881 [2024-07-12 19:26:28.937434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.881 [2024-07-12 19:26:28.937442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.881 [2024-07-12 19:26:28.937462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-07-12 19:26:28.947392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.881 [2024-07-12 19:26:28.947485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.881 [2024-07-12 19:26:28.947512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.882 [2024-07-12 19:26:28.947521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.882 [2024-07-12 19:26:28.947528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.882 [2024-07-12 19:26:28.947548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.882 qpair failed and we were unable to recover it. 00:30:22.882 [2024-07-12 19:26:28.957436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.882 [2024-07-12 19:26:28.957537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.882 [2024-07-12 19:26:28.957562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.882 [2024-07-12 19:26:28.957573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.882 [2024-07-12 19:26:28.957580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.882 [2024-07-12 19:26:28.957599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.882 qpair failed and we were unable to recover it. 00:30:22.882 [2024-07-12 19:26:28.967415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.882 [2024-07-12 19:26:28.967511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.882 [2024-07-12 19:26:28.967541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.882 [2024-07-12 19:26:28.967551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.882 [2024-07-12 19:26:28.967558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.882 [2024-07-12 19:26:28.967579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.882 qpair failed and we were unable to recover it. 
00:30:22.882 [2024-07-12 19:26:28.977559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.882 [2024-07-12 19:26:28.977668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.882 [2024-07-12 19:26:28.977694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.882 [2024-07-12 19:26:28.977703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.882 [2024-07-12 19:26:28.977710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.882 [2024-07-12 19:26:28.977730] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.882 qpair failed and we were unable to recover it. 00:30:22.882 [2024-07-12 19:26:28.987380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:22.882 [2024-07-12 19:26:28.987474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:22.882 [2024-07-12 19:26:28.987502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:22.882 [2024-07-12 19:26:28.987519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:22.882 [2024-07-12 19:26:28.987526] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:22.882 [2024-07-12 19:26:28.987548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.882 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:28.997493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:28.997596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:28.997623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:28.997631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:28.997639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:28.997658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 
00:30:23.145 [2024-07-12 19:26:29.007524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.007617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.007642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.007651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.007658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.007680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:29.017604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.017741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.017771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.017780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.017786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.017807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:29.027503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.027676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.027702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.027710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.027718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.027738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 
00:30:23.145 [2024-07-12 19:26:29.037638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.037738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.037777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.037788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.037795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.037820] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:29.047679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.047823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.047864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.047874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.047881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.047907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:29.057749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.057902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.057942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.057952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.057959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.057985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 
00:30:23.145 [2024-07-12 19:26:29.067807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.067915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.067943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.067951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.067958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.067979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:29.077750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.077833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.145 [2024-07-12 19:26:29.077859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.145 [2024-07-12 19:26:29.077875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.145 [2024-07-12 19:26:29.077882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.145 [2024-07-12 19:26:29.077901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.145 qpair failed and we were unable to recover it. 00:30:23.145 [2024-07-12 19:26:29.087805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.145 [2024-07-12 19:26:29.087894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.087920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.087928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.087936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.087955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 
00:30:23.146 [2024-07-12 19:26:29.097803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.097907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.097932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.097940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.097948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.097969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.107780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.107908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.107933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.107941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.107948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.107967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.117928] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.118021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.118048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.118057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.118064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.118084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 
00:30:23.146 [2024-07-12 19:26:29.127884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.127983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.128009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.128019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.128026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.128045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.138012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.138114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.138146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.138155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.138163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.138185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.147893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.148015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.148044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.148053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.148061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.148082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 
00:30:23.146 [2024-07-12 19:26:29.158091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.158241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.158269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.158277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.158284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.158305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.168316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.168421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.168454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.168463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.168470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.168489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.178055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.178183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.178212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.178222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.178229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.178250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 
00:30:23.146 [2024-07-12 19:26:29.188181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.188268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.188294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.188302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.188310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.188330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.198198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.198330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.198357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.198366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.198373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.198396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.208153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.208245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.208271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.208281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.208288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.208315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 
00:30:23.146 [2024-07-12 19:26:29.218218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.218337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.218363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.218372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.146 [2024-07-12 19:26:29.218379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.146 [2024-07-12 19:26:29.218399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.146 qpair failed and we were unable to recover it. 00:30:23.146 [2024-07-12 19:26:29.228221] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.146 [2024-07-12 19:26:29.228337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.146 [2024-07-12 19:26:29.228364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.146 [2024-07-12 19:26:29.228373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.147 [2024-07-12 19:26:29.228380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.147 [2024-07-12 19:26:29.228399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.147 qpair failed and we were unable to recover it. 00:30:23.147 [2024-07-12 19:26:29.238265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.147 [2024-07-12 19:26:29.238349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.147 [2024-07-12 19:26:29.238374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.147 [2024-07-12 19:26:29.238384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.147 [2024-07-12 19:26:29.238391] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.147 [2024-07-12 19:26:29.238411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.147 qpair failed and we were unable to recover it. 
00:30:23.147 [2024-07-12 19:26:29.248247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.147 [2024-07-12 19:26:29.248335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.147 [2024-07-12 19:26:29.248361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.147 [2024-07-12 19:26:29.248370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.147 [2024-07-12 19:26:29.248377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.147 [2024-07-12 19:26:29.248397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.147 qpair failed and we were unable to recover it. 00:30:23.147 [2024-07-12 19:26:29.258299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.147 [2024-07-12 19:26:29.258431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.147 [2024-07-12 19:26:29.258464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.147 [2024-07-12 19:26:29.258473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.147 [2024-07-12 19:26:29.258479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.147 [2024-07-12 19:26:29.258499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.147 qpair failed and we were unable to recover it. 00:30:23.147 [2024-07-12 19:26:29.268395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.147 [2024-07-12 19:26:29.268512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.147 [2024-07-12 19:26:29.268538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.147 [2024-07-12 19:26:29.268547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.147 [2024-07-12 19:26:29.268554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.147 [2024-07-12 19:26:29.268574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.147 qpair failed and we were unable to recover it. 
00:30:23.408 [2024-07-12 19:26:29.278327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.278414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.278441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.278450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.278457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.278478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.288357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.288452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.288478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.288486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.288493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.288512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.298397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.298497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.298523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.298531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.298545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.298564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 
00:30:23.408 [2024-07-12 19:26:29.308426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.308518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.308543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.308551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.308558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.308577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.318499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.318592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.318618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.318627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.318634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.318653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.328521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.328608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.328634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.328643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.328652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.328670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 
00:30:23.408 [2024-07-12 19:26:29.338556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.338664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.338696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.338705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.338711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.338733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.348573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.348665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.348705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.348716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.348725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.348750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.358661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.358764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.358791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.358801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.358808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.358830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 
00:30:23.408 [2024-07-12 19:26:29.368535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.368628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.368655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.368664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.368671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.368691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.378703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.378805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.378831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.378841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.378847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.378868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.388690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.388783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.388809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.388818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.388832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.388852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 
00:30:23.408 [2024-07-12 19:26:29.398745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.398875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.398901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.398909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.408 [2024-07-12 19:26:29.398916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.408 [2024-07-12 19:26:29.398936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.408 qpair failed and we were unable to recover it. 00:30:23.408 [2024-07-12 19:26:29.408761] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.408 [2024-07-12 19:26:29.408860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.408 [2024-07-12 19:26:29.408892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.408 [2024-07-12 19:26:29.408901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.408908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.408929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.418796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.418901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.418928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.418936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.418942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.418964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 
00:30:23.409 [2024-07-12 19:26:29.428752] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.428844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.428870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.428879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.428886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.428906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.438833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.438930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.438957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.438965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.438972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.438992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.448932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.449056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.449082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.449090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.449096] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.449116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 
00:30:23.409 [2024-07-12 19:26:29.458914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.459027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.459054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.459063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.459070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.459089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.468957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.469042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.469067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.469077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.469083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.469103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.478929] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.479014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.479041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.479060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.479067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.479086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 
00:30:23.409 [2024-07-12 19:26:29.488996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.489083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.489110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.489119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.489134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.489154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.499050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.499150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.499177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.499187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.499193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.499213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.509055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.509178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.509204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.509213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.509219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.509239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 
00:30:23.409 [2024-07-12 19:26:29.519179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.519313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.519340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.519348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.519355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.519374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.409 [2024-07-12 19:26:29.529151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.409 [2024-07-12 19:26:29.529256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.409 [2024-07-12 19:26:29.529282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.409 [2024-07-12 19:26:29.529292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.409 [2024-07-12 19:26:29.529302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.409 [2024-07-12 19:26:29.529323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.409 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.539179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.539278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.539305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.539313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.539321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.539341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 
00:30:23.671 [2024-07-12 19:26:29.549177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.549266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.549290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.549299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.549305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.549325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.559237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.559327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.559353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.559363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.559371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.559391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.569285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.569377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.569409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.569417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.569424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.569443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 
00:30:23.671 [2024-07-12 19:26:29.579324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.579436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.579462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.579471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.579479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.579498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.589382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.589477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.589502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.589512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.589519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.589539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.599368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.599460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.599485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.599494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.599501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.599520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 
00:30:23.671 [2024-07-12 19:26:29.609374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.609465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.609490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.609499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.609506] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.609533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.619447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.619543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.619568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.619577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.619584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.619603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.629444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.629538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.629564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.629572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.629579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.629598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 
00:30:23.671 [2024-07-12 19:26:29.639454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.639566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.639592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.639600] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.639607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.639627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.649496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.649587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.671 [2024-07-12 19:26:29.649612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.671 [2024-07-12 19:26:29.649620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.671 [2024-07-12 19:26:29.649627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.671 [2024-07-12 19:26:29.649645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.671 qpair failed and we were unable to recover it. 00:30:23.671 [2024-07-12 19:26:29.659535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.671 [2024-07-12 19:26:29.659632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.659665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.659673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.659681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.659700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 
00:30:23.672 [2024-07-12 19:26:29.669557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.669671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.669699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.669707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.669714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.669735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.679572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.679668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.679697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.679706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.679713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.679734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.689627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.689725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.689765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.689775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.689782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.689808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 
00:30:23.672 [2024-07-12 19:26:29.699684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.699795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.699835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.699846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.699860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.699887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.709697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.709787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.709817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.709826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.709833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.709856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.719708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.719889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.719916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.719925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.719932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.719952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 
00:30:23.672 [2024-07-12 19:26:29.729744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.729838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.729865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.729874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.729881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.729902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.739770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.739900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.739926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.739935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.739942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.739962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.749751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.749850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.749877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.749886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.749893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.749914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 
00:30:23.672 [2024-07-12 19:26:29.759818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.759915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.759942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.759950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.759957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.759977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.769860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.769949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.769975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.769984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.769991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.770011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.672 [2024-07-12 19:26:29.779909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.780016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.780042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.780050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.780058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.780078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 
00:30:23.672 [2024-07-12 19:26:29.789875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.672 [2024-07-12 19:26:29.789973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.672 [2024-07-12 19:26:29.789999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.672 [2024-07-12 19:26:29.790008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.672 [2024-07-12 19:26:29.790022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.672 [2024-07-12 19:26:29.790042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.672 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.799922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.800016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.800043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.800052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.800059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.800079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.809947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.810051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.810077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.810086] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.810093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.810112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 
00:30:23.935 [2024-07-12 19:26:29.820012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.820110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.820146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.820154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.820161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.820182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.830030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.830120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.830157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.830167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.830174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.830197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.839980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.840106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.840145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.840155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.840162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.840184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 
00:30:23.935 [2024-07-12 19:26:29.850078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.850177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.850204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.850212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.850220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.850243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.860079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.860179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.860207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.860216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.860223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.860243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.870152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.870275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.870300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.870309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.870317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.870337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 
00:30:23.935 [2024-07-12 19:26:29.880182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.880271] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.880296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.880312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.880321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.880341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.890180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.890269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.890293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.890302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.890314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.890335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 00:30:23.935 [2024-07-12 19:26:29.900258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.935 [2024-07-12 19:26:29.900367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.935 [2024-07-12 19:26:29.900393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.935 [2024-07-12 19:26:29.900402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.935 [2024-07-12 19:26:29.900409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.935 [2024-07-12 19:26:29.900429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.935 qpair failed and we were unable to recover it. 
00:30:23.935 [2024-07-12 19:26:29.910226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.910327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.910354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.910364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.910371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.910391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:29.920173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.920264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.920289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.920298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.920305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.920327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:29.930331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.930424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.930451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.930460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.930467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.930487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 
00:30:23.936 [2024-07-12 19:26:29.940393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.940500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.940526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.940535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.940543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.940562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:29.950290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.950412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.950437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.950446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.950452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.950472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:29.960402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.960493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.960519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.960527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.960535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.960555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 
00:30:23.936 [2024-07-12 19:26:29.970463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.970553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.970585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.970594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.970601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.970620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:29.980466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.980574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.980601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.980610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.980616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.980636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:29.990517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:29.990606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:29.990634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:29.990643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:29.990650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:29.990669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 
00:30:23.936 [2024-07-12 19:26:30.000519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:30.000599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:30.000625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:30.000634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:30.000641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:30.000661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:30.010584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:30.010686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:30.010714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:30.010723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:30.010730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:30.010757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:30.020584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:30.020690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:30.020716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:30.020726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:30.020733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:30.020752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 
00:30:23.936 [2024-07-12 19:26:30.030693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:30.030789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:30.030822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:30.030833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:30.030841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:30.030866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:30.040683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:30.040780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:30.040820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.936 [2024-07-12 19:26:30.040832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.936 [2024-07-12 19:26:30.040840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.936 [2024-07-12 19:26:30.040867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.936 qpair failed and we were unable to recover it. 00:30:23.936 [2024-07-12 19:26:30.050673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.936 [2024-07-12 19:26:30.050771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.936 [2024-07-12 19:26:30.050810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.937 [2024-07-12 19:26:30.050821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.937 [2024-07-12 19:26:30.050829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.937 [2024-07-12 19:26:30.050856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.937 qpair failed and we were unable to recover it. 
00:30:23.937 [2024-07-12 19:26:30.060711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:23.937 [2024-07-12 19:26:30.060825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:23.937 [2024-07-12 19:26:30.060872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:23.937 [2024-07-12 19:26:30.060883] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:23.937 [2024-07-12 19:26:30.060891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:23.937 [2024-07-12 19:26:30.060918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:23.937 qpair failed and we were unable to recover it. 00:30:24.200 [2024-07-12 19:26:30.070751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.070853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.070893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.070903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.070911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.070937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 00:30:24.200 [2024-07-12 19:26:30.080709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.080803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.080830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.080840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.080847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.080867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 
00:30:24.200 [2024-07-12 19:26:30.090685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.090860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.090886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.090895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.090902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.090922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 00:30:24.200 [2024-07-12 19:26:30.100834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.100929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.100955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.100964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.100971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.101000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 00:30:24.200 [2024-07-12 19:26:30.110770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.110866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.110891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.110899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.110909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.110930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 
00:30:24.200 [2024-07-12 19:26:30.120818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.120903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.120934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.120943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.120950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.120973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 00:30:24.200 [2024-07-12 19:26:30.130970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.131061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.131088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.131097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.131103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.131134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 00:30:24.200 [2024-07-12 19:26:30.140915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.200 [2024-07-12 19:26:30.141060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.200 [2024-07-12 19:26:30.141087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.200 [2024-07-12 19:26:30.141095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.200 [2024-07-12 19:26:30.141103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.200 [2024-07-12 19:26:30.141131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.200 qpair failed and we were unable to recover it. 
00:30:24.200 [2024-07-12 19:26:30.151017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.151105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.151137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.151146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.151155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.151175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.160970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.161059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.161085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.161094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.161101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.161120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.171023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.171136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.171163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.171171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.171178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.171199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 
00:30:24.201 [2024-07-12 19:26:30.181057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.181164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.181191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.181200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.181207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.181226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.191117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.191232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.191258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.191268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.191281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.191302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.201188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.201283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.201308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.201316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.201324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.201343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 
00:30:24.201 [2024-07-12 19:26:30.211054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.211230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.211256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.211264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.211271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.211291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.221236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.221365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.221390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.221399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.221406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.221425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.231240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.231329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.231354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.231363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.231370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.231389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 
00:30:24.201 [2024-07-12 19:26:30.241158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.241255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.241282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.241290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.241297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.241317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.251282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.251376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.251401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.251409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.251416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.251436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.261226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.261328] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.261354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.261363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.261369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.261388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 
00:30:24.201 [2024-07-12 19:26:30.271322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.271421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.271447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.271455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.271462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.271482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.281416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.281514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.281540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.201 [2024-07-12 19:26:30.281561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.201 [2024-07-12 19:26:30.281569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.201 [2024-07-12 19:26:30.281588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.201 qpair failed and we were unable to recover it. 00:30:24.201 [2024-07-12 19:26:30.291381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.201 [2024-07-12 19:26:30.291470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.201 [2024-07-12 19:26:30.291499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.202 [2024-07-12 19:26:30.291510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.202 [2024-07-12 19:26:30.291517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.202 [2024-07-12 19:26:30.291537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.202 qpair failed and we were unable to recover it. 
00:30:24.202 [2024-07-12 19:26:30.301415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.202 [2024-07-12 19:26:30.301514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.202 [2024-07-12 19:26:30.301540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.202 [2024-07-12 19:26:30.301549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.202 [2024-07-12 19:26:30.301556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.202 [2024-07-12 19:26:30.301576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.202 qpair failed and we were unable to recover it. 00:30:24.202 [2024-07-12 19:26:30.311504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.202 [2024-07-12 19:26:30.311619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.202 [2024-07-12 19:26:30.311644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.202 [2024-07-12 19:26:30.311653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.202 [2024-07-12 19:26:30.311660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.202 [2024-07-12 19:26:30.311679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.202 qpair failed and we were unable to recover it. 00:30:24.202 [2024-07-12 19:26:30.321467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.202 [2024-07-12 19:26:30.321554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.202 [2024-07-12 19:26:30.321580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.202 [2024-07-12 19:26:30.321588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.202 [2024-07-12 19:26:30.321595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.202 [2024-07-12 19:26:30.321614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.202 qpair failed and we were unable to recover it. 
00:30:24.465 [2024-07-12 19:26:30.331493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.465 [2024-07-12 19:26:30.331581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.465 [2024-07-12 19:26:30.331607] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.465 [2024-07-12 19:26:30.331616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.465 [2024-07-12 19:26:30.331623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.465 [2024-07-12 19:26:30.331642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.465 qpair failed and we were unable to recover it. 00:30:24.465 [2024-07-12 19:26:30.341532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.465 [2024-07-12 19:26:30.341638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.465 [2024-07-12 19:26:30.341664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.465 [2024-07-12 19:26:30.341673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.465 [2024-07-12 19:26:30.341680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.465 [2024-07-12 19:26:30.341700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.465 qpair failed and we were unable to recover it. 00:30:24.465 [2024-07-12 19:26:30.351538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.465 [2024-07-12 19:26:30.351633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.465 [2024-07-12 19:26:30.351672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.465 [2024-07-12 19:26:30.351683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.465 [2024-07-12 19:26:30.351691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.465 [2024-07-12 19:26:30.351717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.465 qpair failed and we were unable to recover it. 
00:30:24.465 [2024-07-12 19:26:30.361601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.465 [2024-07-12 19:26:30.361699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.465 [2024-07-12 19:26:30.361727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.465 [2024-07-12 19:26:30.361738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.465 [2024-07-12 19:26:30.361745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.465 [2024-07-12 19:26:30.361767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.465 qpair failed and we were unable to recover it. 00:30:24.465 [2024-07-12 19:26:30.371687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.465 [2024-07-12 19:26:30.371788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.465 [2024-07-12 19:26:30.371827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.465 [2024-07-12 19:26:30.371845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.465 [2024-07-12 19:26:30.371853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.465 [2024-07-12 19:26:30.371879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.465 qpair failed and we were unable to recover it. 00:30:24.465 [2024-07-12 19:26:30.381630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.381744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.381782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.381792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.381800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.381825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 
00:30:24.466 [2024-07-12 19:26:30.391686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.391784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.391823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.391834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.391842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.391868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.401701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.401798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.401837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.401848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.401856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.401881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.411691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.411790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.411829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.411841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.411848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.411874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 
00:30:24.466 [2024-07-12 19:26:30.421803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.421908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.421941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.421950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.421957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.421981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.431817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.431906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.431935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.431944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.431951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.431973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.441737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.441830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.441869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.441881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.441888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.441915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 
00:30:24.466 [2024-07-12 19:26:30.451854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.451947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.451976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.451984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.451991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.452013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.461861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.461986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.462019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.462027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.462034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.462055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.471904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.472022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.472048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.472056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.472063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.472083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 
00:30:24.466 [2024-07-12 19:26:30.481934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.482023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.482049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.482058] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.482065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.482084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.491962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.492065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.492091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.492100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.492107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.492134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.502039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.502144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.502171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.502180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.502187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.502214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 
00:30:24.466 [2024-07-12 19:26:30.512038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.512136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.466 [2024-07-12 19:26:30.512163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.466 [2024-07-12 19:26:30.512171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.466 [2024-07-12 19:26:30.512178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.466 [2024-07-12 19:26:30.512198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.466 qpair failed and we were unable to recover it. 00:30:24.466 [2024-07-12 19:26:30.522051] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.466 [2024-07-12 19:26:30.522143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.522170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.522179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.522187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.522207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 00:30:24.467 [2024-07-12 19:26:30.532069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.532164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.532191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.532199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.532206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.532225] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 
00:30:24.467 [2024-07-12 19:26:30.542173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.542283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.542309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.542317] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.542324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.542343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 00:30:24.467 [2024-07-12 19:26:30.552115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.552207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.552240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.552249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.552256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.552277] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 00:30:24.467 [2024-07-12 19:26:30.562202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.562287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.562313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.562321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.562328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.562347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 
00:30:24.467 [2024-07-12 19:26:30.572201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.572331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.572357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.572366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.572373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.572392] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 00:30:24.467 [2024-07-12 19:26:30.582220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.582323] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.582350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.582358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.582365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.582384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 00:30:24.467 [2024-07-12 19:26:30.592312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.467 [2024-07-12 19:26:30.592418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.467 [2024-07-12 19:26:30.592444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.467 [2024-07-12 19:26:30.592453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.467 [2024-07-12 19:26:30.592466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.467 [2024-07-12 19:26:30.592486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.467 qpair failed and we were unable to recover it. 
00:30:24.730 [2024-07-12 19:26:30.602306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.730 [2024-07-12 19:26:30.602397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.730 [2024-07-12 19:26:30.602424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.730 [2024-07-12 19:26:30.602432] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.730 [2024-07-12 19:26:30.602439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.730 [2024-07-12 19:26:30.602459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.730 qpair failed and we were unable to recover it. 00:30:24.730 [2024-07-12 19:26:30.612333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.730 [2024-07-12 19:26:30.612438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.730 [2024-07-12 19:26:30.612463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.730 [2024-07-12 19:26:30.612472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.730 [2024-07-12 19:26:30.612479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.730 [2024-07-12 19:26:30.612498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.730 qpair failed and we were unable to recover it. 00:30:24.730 [2024-07-12 19:26:30.622382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.730 [2024-07-12 19:26:30.622476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.730 [2024-07-12 19:26:30.622503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.730 [2024-07-12 19:26:30.622512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.730 [2024-07-12 19:26:30.622519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.730 [2024-07-12 19:26:30.622539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.730 qpair failed and we were unable to recover it. 
00:30:24.730 [2024-07-12 19:26:30.632281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.730 [2024-07-12 19:26:30.632415] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.730 [2024-07-12 19:26:30.632441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.730 [2024-07-12 19:26:30.632450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.730 [2024-07-12 19:26:30.632456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.730 [2024-07-12 19:26:30.632476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.730 qpair failed and we were unable to recover it. 00:30:24.730 [2024-07-12 19:26:30.642459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.730 [2024-07-12 19:26:30.642592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.642618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.642626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.642633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.642653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.652461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.652548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.652575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.652584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.652591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.652611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 
00:30:24.731 [2024-07-12 19:26:30.662440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.662539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.662564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.662572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.662580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.662599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.672520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.672614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.672640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.672648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.672655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.672675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.682593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.682685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.682710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.682726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.682733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.682752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 
00:30:24.731 [2024-07-12 19:26:30.692589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.692686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.692716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.692724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.692731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.692753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.702584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.702685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.702711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.702721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.702727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.702747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.712644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.712733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.712762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.712770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.712777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.712797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 
00:30:24.731 [2024-07-12 19:26:30.722652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.722743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.722771] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.722780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.722788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.722807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.732591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.732677] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.732702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.732710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.732717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.732736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.742785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.742935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.742962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.742970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.742977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.742995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 
00:30:24.731 [2024-07-12 19:26:30.752685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.752771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.752797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.752806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.752814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.752833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.762754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.762840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.762867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.762875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.762883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.762902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 00:30:24.731 [2024-07-12 19:26:30.772777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.772871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.772895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.772910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.772917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.731 [2024-07-12 19:26:30.772935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.731 qpair failed and we were unable to recover it. 
00:30:24.731 [2024-07-12 19:26:30.782817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.731 [2024-07-12 19:26:30.782919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.731 [2024-07-12 19:26:30.782942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.731 [2024-07-12 19:26:30.782951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.731 [2024-07-12 19:26:30.782958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.782976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 00:30:24.732 [2024-07-12 19:26:30.792838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.792958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.792981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.792989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.792996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.793013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 00:30:24.732 [2024-07-12 19:26:30.802858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.802943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.802964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.802973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.802980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.802997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 
00:30:24.732 [2024-07-12 19:26:30.812957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.813130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.813152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.813160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.813167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.813184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 00:30:24.732 [2024-07-12 19:26:30.822916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.823004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.823024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.823032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.823040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.823057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 00:30:24.732 [2024-07-12 19:26:30.832901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.832980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.832999] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.833007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.833014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.833031] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 
00:30:24.732 [2024-07-12 19:26:30.842996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.843082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.843103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.843112] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.843118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.843143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 00:30:24.732 [2024-07-12 19:26:30.853035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.732 [2024-07-12 19:26:30.853118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.732 [2024-07-12 19:26:30.853146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.732 [2024-07-12 19:26:30.853155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.732 [2024-07-12 19:26:30.853161] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.732 [2024-07-12 19:26:30.853178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.732 qpair failed and we were unable to recover it. 00:30:24.994 [2024-07-12 19:26:30.863059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.863152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.863176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.863185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.863192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.863209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 
00:30:24.994 [2024-07-12 19:26:30.873016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.873082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.873101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.873108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.873115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.873135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 00:30:24.994 [2024-07-12 19:26:30.883103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.883198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.883217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.883225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.883232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.883248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 00:30:24.994 [2024-07-12 19:26:30.893131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.893213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.893231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.893238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.893244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.893259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 
00:30:24.994 [2024-07-12 19:26:30.903139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.903231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.903250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.903258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.903265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.903288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 00:30:24.994 [2024-07-12 19:26:30.913146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.913222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.913239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.913247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.913253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.913269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 00:30:24.994 [2024-07-12 19:26:30.923219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.923293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.923310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.923318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.923325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.923340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 
00:30:24.994 [2024-07-12 19:26:30.933225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.933303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.933320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.994 [2024-07-12 19:26:30.933327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.994 [2024-07-12 19:26:30.933334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.994 [2024-07-12 19:26:30.933349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.994 qpair failed and we were unable to recover it. 00:30:24.994 [2024-07-12 19:26:30.943275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.994 [2024-07-12 19:26:30.943366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.994 [2024-07-12 19:26:30.943383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:30.943391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:30.943397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:30.943413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:30.953359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:30.953431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:30.953451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:30.953459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:30.953466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:30.953481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 
00:30:24.995 [2024-07-12 19:26:30.963346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:30.963474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:30.963491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:30.963499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:30.963505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:30.963520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:30.973410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:30.973488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:30.973505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:30.973512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:30.973520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:30.973535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:30.983267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:30.983348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:30.983364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:30.983372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:30.983378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:30.983393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 
00:30:24.995 [2024-07-12 19:26:30.993314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:30.993388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:30.993403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:30.993411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:30.993421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:30.993436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:31.003414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.003491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.003507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.003515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.003521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.003536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:31.013440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.013523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.013538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.013545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.013552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.013566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 
00:30:24.995 [2024-07-12 19:26:31.023383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.023465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.023481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.023488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.023495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.023510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:31.033372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.033443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.033459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.033466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.033472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.033488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:31.043531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.043610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.043626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.043633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.043639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.043655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 
00:30:24.995 [2024-07-12 19:26:31.053554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.053635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.053650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.053657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.053664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.053679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:31.063539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.063620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.063636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.063643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.063650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.063664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.995 [2024-07-12 19:26:31.073565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.073642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.073657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.073665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.073671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.073685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 
00:30:24.995 [2024-07-12 19:26:31.083615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.995 [2024-07-12 19:26:31.083694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.995 [2024-07-12 19:26:31.083710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.995 [2024-07-12 19:26:31.083718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.995 [2024-07-12 19:26:31.083728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.995 [2024-07-12 19:26:31.083742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.995 qpair failed and we were unable to recover it. 00:30:24.996 [2024-07-12 19:26:31.093671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.996 [2024-07-12 19:26:31.093751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.996 [2024-07-12 19:26:31.093767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.996 [2024-07-12 19:26:31.093774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.996 [2024-07-12 19:26:31.093781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.996 [2024-07-12 19:26:31.093796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.996 qpair failed and we were unable to recover it. 00:30:24.996 [2024-07-12 19:26:31.103688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.996 [2024-07-12 19:26:31.103772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.996 [2024-07-12 19:26:31.103791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.996 [2024-07-12 19:26:31.103799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.996 [2024-07-12 19:26:31.103806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.996 [2024-07-12 19:26:31.103821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.996 qpair failed and we were unable to recover it. 
00:30:24.996 [2024-07-12 19:26:31.113727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.996 [2024-07-12 19:26:31.113817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.996 [2024-07-12 19:26:31.113833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.996 [2024-07-12 19:26:31.113840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.996 [2024-07-12 19:26:31.113847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:24.996 [2024-07-12 19:26:31.113861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:24.996 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.123736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.123811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.123827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.123834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.123840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.123855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.133781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.133866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.133882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.133889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.133895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.133911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 
00:30:25.258 [2024-07-12 19:26:31.143724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.143850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.143876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.143885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.143892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.143911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.153744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.153819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.153844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.153853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.153860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.153879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.163757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.163833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.163850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.163857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.163866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.163882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 
00:30:25.258 [2024-07-12 19:26:31.173898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.173976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.173992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.174004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.174011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.174027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.183948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.184027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.184043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.184051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.184057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.184072] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.193961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.194049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.194064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.194072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.194078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.194094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 
00:30:25.258 [2024-07-12 19:26:31.203983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.204082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.204098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.204105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.258 [2024-07-12 19:26:31.204111] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.258 [2024-07-12 19:26:31.204131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.258 qpair failed and we were unable to recover it. 00:30:25.258 [2024-07-12 19:26:31.213999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.258 [2024-07-12 19:26:31.214075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.258 [2024-07-12 19:26:31.214091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.258 [2024-07-12 19:26:31.214098] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.214105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.214120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.224049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.224133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.224149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.224156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.224163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.224178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 
00:30:25.259 [2024-07-12 19:26:31.234006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.234072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.234087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.234094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.234101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.234115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.244069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.244151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.244167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.244174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.244181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.244196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.254111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.254190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.254206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.254213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.254219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.254234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 
00:30:25.259 [2024-07-12 19:26:31.264126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.264210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.264229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.264237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.264243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.264258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.274113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.274193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.274208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.274215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.274221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.274237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.284187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.284253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.284269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.284276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.284282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.284296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 
00:30:25.259 [2024-07-12 19:26:31.294116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.294249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.294266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.294273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.294279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.294295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.304264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.304350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.304368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.304375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.304381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.304401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 00:30:25.259 [2024-07-12 19:26:31.314241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.259 [2024-07-12 19:26:31.314310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.259 [2024-07-12 19:26:31.314325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.259 [2024-07-12 19:26:31.314332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.259 [2024-07-12 19:26:31.314338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.259 [2024-07-12 19:26:31.314354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.259 qpair failed and we were unable to recover it. 
00:30:25.259 [2024-07-12 19:26:31.324310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.324386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.324401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.324408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.324415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.324430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 00:30:25.260 [2024-07-12 19:26:31.334349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.334430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.334445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.334453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.334460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.334475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 00:30:25.260 [2024-07-12 19:26:31.344367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.344445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.344460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.344468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.344474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.344490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 
00:30:25.260 [2024-07-12 19:26:31.354356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.354433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.354451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.354458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.354464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.354479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 00:30:25.260 [2024-07-12 19:26:31.364461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.364535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.364550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.364558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.364564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.364579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 00:30:25.260 [2024-07-12 19:26:31.374465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.374542] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.374557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.374564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.374571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.374585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 
00:30:25.260 [2024-07-12 19:26:31.384575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.260 [2024-07-12 19:26:31.384663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.260 [2024-07-12 19:26:31.384678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.260 [2024-07-12 19:26:31.384686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.260 [2024-07-12 19:26:31.384692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.260 [2024-07-12 19:26:31.384707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.260 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.394464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.394540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.394555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.394563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.394573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.394588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.404475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.404569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.404586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.404594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.404600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.404615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 
00:30:25.522 [2024-07-12 19:26:31.414567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.414644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.414660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.414667] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.414674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.414688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.424591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.424669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.424684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.424691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.424698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.424713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.434569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.434647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.434661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.434668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.434674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.434688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 
00:30:25.522 [2024-07-12 19:26:31.444643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.444715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.444730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.444737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.444744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.444759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.454679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.454760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.454775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.454783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.454790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.454804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.464602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.464711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.464727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.464735] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.464742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.464756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 
00:30:25.522 [2024-07-12 19:26:31.474724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.474846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.474862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.474869] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.474875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.474890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.484816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.484892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.484907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.484914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.484925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.484940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 00:30:25.522 [2024-07-12 19:26:31.494810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.522 [2024-07-12 19:26:31.494891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.522 [2024-07-12 19:26:31.494907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.522 [2024-07-12 19:26:31.494914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.522 [2024-07-12 19:26:31.494921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.522 [2024-07-12 19:26:31.494936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.522 qpair failed and we were unable to recover it. 
00:30:25.522 [2024-07-12 19:26:31.504775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.504848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.504863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.504870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.504877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.504892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.514808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.514881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.514896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.514903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.514911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.514925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.524857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.524935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.524950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.524957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.524964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.524978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 
00:30:25.523 [2024-07-12 19:26:31.534895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.534972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.534987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.534994] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.535002] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.535016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.544925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.545009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.545025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.545033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.545039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.545054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.554945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.555015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.555030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.555036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.555043] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.555057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 
00:30:25.523 [2024-07-12 19:26:31.564966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.565041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.565057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.565064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.565070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.565084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.575014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.575090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.575105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.575116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.575127] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.575143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.584997] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.585076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.585092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.585099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.585106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.585120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 
00:30:25.523 [2024-07-12 19:26:31.595039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.595111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.595130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.595138] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.595143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.595160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.605081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.605189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.605206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.605213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.605219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.605234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.615132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.615208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.615234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.615242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.615249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.615265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 
00:30:25.523 [2024-07-12 19:26:31.625105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.625181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.625197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.625204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.625210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.625226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.635019] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.635142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.635157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.635165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.523 [2024-07-12 19:26:31.635172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.523 [2024-07-12 19:26:31.635187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.523 qpair failed and we were unable to recover it. 00:30:25.523 [2024-07-12 19:26:31.645217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.523 [2024-07-12 19:26:31.645292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.523 [2024-07-12 19:26:31.645308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.523 [2024-07-12 19:26:31.645315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.524 [2024-07-12 19:26:31.645321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.524 [2024-07-12 19:26:31.645336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.524 qpair failed and we were unable to recover it. 
00:30:25.786 [2024-07-12 19:26:31.655229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.786 [2024-07-12 19:26:31.655307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.786 [2024-07-12 19:26:31.655322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.786 [2024-07-12 19:26:31.655329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.786 [2024-07-12 19:26:31.655335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.786 [2024-07-12 19:26:31.655351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.786 qpair failed and we were unable to recover it. 00:30:25.786 [2024-07-12 19:26:31.665213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.786 [2024-07-12 19:26:31.665291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.786 [2024-07-12 19:26:31.665312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.665320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.665326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.665340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.675147] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.675268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.675284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.675291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.675298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.675313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 
00:30:25.787 [2024-07-12 19:26:31.685355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.685427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.685442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.685449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.685455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.685470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.695386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.695473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.695488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.695495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.695501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.695515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.705344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.705423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.705438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.705445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.705452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.705469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 
00:30:25.787 [2024-07-12 19:26:31.715360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.715433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.715448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.715456] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.715462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.715476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.725454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.725523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.725538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.725545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.725551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.725566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.735456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.735558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.735574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.735581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.735587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.735600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 
00:30:25.787 [2024-07-12 19:26:31.745418] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.745496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.745511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.745518] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.745524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.745538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.755371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.755438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.755457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.755464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.755470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.755484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 00:30:25.787 [2024-07-12 19:26:31.765553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.787 [2024-07-12 19:26:31.765633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.787 [2024-07-12 19:26:31.765648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.787 [2024-07-12 19:26:31.765655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.787 [2024-07-12 19:26:31.765662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.787 [2024-07-12 19:26:31.765677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.787 qpair failed and we were unable to recover it. 
00:30:25.787 [2024-07-12 19:26:31.775598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.775675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.775690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.775697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.775703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.775718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.785548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.785622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.785638] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.785645] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.785651] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.785665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.795596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.795665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.795680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.795687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.795693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.795721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 
00:30:25.788 [2024-07-12 19:26:31.805634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.805710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.805725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.805732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.805740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.805754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.815672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.815748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.815765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.815772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.815778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.815793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.825569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.825645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.825661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.825668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.825674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.825689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 
00:30:25.788 [2024-07-12 19:26:31.835750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.835820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.835835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.835842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.835848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.835863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.845729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.845811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.845836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.845845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.845851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.845871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.855715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.855781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.855799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.855806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.855813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.855829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 
00:30:25.788 [2024-07-12 19:26:31.865813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.865893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.865918] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.865927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.865934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.865953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.788 [2024-07-12 19:26:31.875704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.788 [2024-07-12 19:26:31.875776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.788 [2024-07-12 19:26:31.875800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.788 [2024-07-12 19:26:31.875809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.788 [2024-07-12 19:26:31.875815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.788 [2024-07-12 19:26:31.875834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.788 qpair failed and we were unable to recover it. 00:30:25.789 [2024-07-12 19:26:31.885889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.789 [2024-07-12 19:26:31.885992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.789 [2024-07-12 19:26:31.886009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.789 [2024-07-12 19:26:31.886017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.789 [2024-07-12 19:26:31.886028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.789 [2024-07-12 19:26:31.886044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.789 qpair failed and we were unable to recover it. 
00:30:25.789 [2024-07-12 19:26:31.895914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.789 [2024-07-12 19:26:31.896026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.789 [2024-07-12 19:26:31.896042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.789 [2024-07-12 19:26:31.896051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.789 [2024-07-12 19:26:31.896057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.789 [2024-07-12 19:26:31.896073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.789 qpair failed and we were unable to recover it. 00:30:25.789 [2024-07-12 19:26:31.905914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.789 [2024-07-12 19:26:31.906021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.789 [2024-07-12 19:26:31.906037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.789 [2024-07-12 19:26:31.906044] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.789 [2024-07-12 19:26:31.906050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:25.789 [2024-07-12 19:26:31.906065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:25.789 qpair failed and we were unable to recover it. 00:30:26.051 [2024-07-12 19:26:31.915984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.051 [2024-07-12 19:26:31.916052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.051 [2024-07-12 19:26:31.916067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.051 [2024-07-12 19:26:31.916074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.051 [2024-07-12 19:26:31.916080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.051 [2024-07-12 19:26:31.916094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.051 qpair failed and we were unable to recover it. 
00:30:26.051 [2024-07-12 19:26:31.925958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.051 [2024-07-12 19:26:31.926031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.051 [2024-07-12 19:26:31.926046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.051 [2024-07-12 19:26:31.926054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.051 [2024-07-12 19:26:31.926060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.051 [2024-07-12 19:26:31.926074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.051 qpair failed and we were unable to recover it. 00:30:26.051 [2024-07-12 19:26:31.935955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.051 [2024-07-12 19:26:31.936043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.051 [2024-07-12 19:26:31.936059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.051 [2024-07-12 19:26:31.936067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.051 [2024-07-12 19:26:31.936073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.051 [2024-07-12 19:26:31.936087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.051 qpair failed and we were unable to recover it. 00:30:26.051 [2024-07-12 19:26:31.945998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.051 [2024-07-12 19:26:31.946066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.051 [2024-07-12 19:26:31.946082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.051 [2024-07-12 19:26:31.946089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.051 [2024-07-12 19:26:31.946095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.051 [2024-07-12 19:26:31.946109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.051 qpair failed and we were unable to recover it. 
00:30:26.051 [2024-07-12 19:26:31.956008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.051 [2024-07-12 19:26:31.956081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.051 [2024-07-12 19:26:31.956096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.051 [2024-07-12 19:26:31.956103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.051 [2024-07-12 19:26:31.956109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.051 [2024-07-12 19:26:31.956128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.051 qpair failed and we were unable to recover it. 00:30:26.051 [2024-07-12 19:26:31.966041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.051 [2024-07-12 19:26:31.966107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.051 [2024-07-12 19:26:31.966127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.051 [2024-07-12 19:26:31.966134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:31.966140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:31.966155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:31.976073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:31.976147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:31.976163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:31.976173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:31.976179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:31.976194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 
00:30:26.052 [2024-07-12 19:26:31.986110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:31.986300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:31.986316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:31.986323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:31.986329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:31.986344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:31.996132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:31.996202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:31.996218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:31.996225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:31.996231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:31.996246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:32.006055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.006124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.006140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.006147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.006153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.006168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 
00:30:26.052 [2024-07-12 19:26:32.016179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.016247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.016262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.016269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.016275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.016290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:32.026218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.026291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.026307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.026314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.026320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.026335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:32.036136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.036204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.036220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.036227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.036234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.036250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 
00:30:26.052 [2024-07-12 19:26:32.046264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.046330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.046346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.046354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.046360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.046374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:32.056299] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.056368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.056383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.056391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.056397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.056412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 00:30:26.052 [2024-07-12 19:26:32.066321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.052 [2024-07-12 19:26:32.066396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.052 [2024-07-12 19:26:32.066411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.052 [2024-07-12 19:26:32.066422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.052 [2024-07-12 19:26:32.066428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.052 [2024-07-12 19:26:32.066442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.052 qpair failed and we were unable to recover it. 
00:30:26.052 [2024-07-12 19:26:32.076323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.076389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.076404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.076411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.076417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.076432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.086392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.086464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.086479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.086486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.086492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.086507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.096404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.096471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.096485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.096493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.096499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.096513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 
00:30:26.053 [2024-07-12 19:26:32.106428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.106503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.106518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.106525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.106531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.106545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.116461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.116529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.116544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.116552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.116558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.116572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.126372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.126442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.126458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.126465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.126471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.126485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 
00:30:26.053 [2024-07-12 19:26:32.136531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.136601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.136617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.136624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.136630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.136644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.146552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.146627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.146642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.146649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.146655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.146669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.156560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.156643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.156662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.156669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.156675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.156690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 
00:30:26.053 [2024-07-12 19:26:32.166592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.166664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.166679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.166686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.166692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.166706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-12 19:26:32.176615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.053 [2024-07-12 19:26:32.176684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.053 [2024-07-12 19:26:32.176699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.053 [2024-07-12 19:26:32.176706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.053 [2024-07-12 19:26:32.176712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.053 [2024-07-12 19:26:32.176726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.186573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.186648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.186663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.186670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.186677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.186691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 
00:30:26.316 [2024-07-12 19:26:32.196665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.196731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.196747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.196754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.196760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.196778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.206721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.206785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.206800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.206807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.206813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.206828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.216772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.216921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.216946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.216955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.216961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.216980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 
00:30:26.316 [2024-07-12 19:26:32.226753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.226832] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.226857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.226866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.226872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.226891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.236750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.236841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.236866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.236874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.236881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.236901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.246823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.246896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.246925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.246934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.246940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.246960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 
00:30:26.316 [2024-07-12 19:26:32.256860] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.256934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.256959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.256968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.256974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.256994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.266852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.266928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.266948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.266956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.266962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.266979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 00:30:26.316 [2024-07-12 19:26:32.276909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.276975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.276991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.276998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.316 [2024-07-12 19:26:32.277004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.316 [2024-07-12 19:26:32.277020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.316 qpair failed and we were unable to recover it. 
00:30:26.316 [2024-07-12 19:26:32.286938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.316 [2024-07-12 19:26:32.287022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.316 [2024-07-12 19:26:32.287038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.316 [2024-07-12 19:26:32.287045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.287056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.287071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.296938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.297055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.297070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.297078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.297085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.297099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.306970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.307045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.307061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.307068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.307074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.307088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 
00:30:26.317 [2024-07-12 19:26:32.316983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.317045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.317061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.317068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.317074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.317088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.327044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.327116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.327135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.327142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.327148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.327163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.337075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.337153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.337168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.337176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.337183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.337197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 
00:30:26.317 [2024-07-12 19:26:32.347082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.347157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.347174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.347181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.347188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.347203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.357135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.357210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.357226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.357233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.357240] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.357255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.367020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.367086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.367103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.367110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.367117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.367136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 
00:30:26.317 [2024-07-12 19:26:32.377155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.377233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.377249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.377260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.377266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.317 [2024-07-12 19:26:32.377281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.317 qpair failed and we were unable to recover it. 00:30:26.317 [2024-07-12 19:26:32.387134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.317 [2024-07-12 19:26:32.387204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.317 [2024-07-12 19:26:32.387220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.317 [2024-07-12 19:26:32.387228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.317 [2024-07-12 19:26:32.387234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.318 [2024-07-12 19:26:32.387249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.318 qpair failed and we were unable to recover it. 00:30:26.318 [2024-07-12 19:26:32.397201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.318 [2024-07-12 19:26:32.397269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.318 [2024-07-12 19:26:32.397285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.318 [2024-07-12 19:26:32.397292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.318 [2024-07-12 19:26:32.397299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.318 [2024-07-12 19:26:32.397313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.318 qpair failed and we were unable to recover it. 
00:30:26.318 [2024-07-12 19:26:32.407253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.318 [2024-07-12 19:26:32.407345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.318 [2024-07-12 19:26:32.407360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.318 [2024-07-12 19:26:32.407368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.318 [2024-07-12 19:26:32.407374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.318 [2024-07-12 19:26:32.407389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.318 qpair failed and we were unable to recover it. 00:30:26.318 [2024-07-12 19:26:32.417259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.318 [2024-07-12 19:26:32.417331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.318 [2024-07-12 19:26:32.417346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.318 [2024-07-12 19:26:32.417353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.318 [2024-07-12 19:26:32.417360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.318 [2024-07-12 19:26:32.417375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.318 qpair failed and we were unable to recover it. 00:30:26.318 [2024-07-12 19:26:32.427284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.318 [2024-07-12 19:26:32.427351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.318 [2024-07-12 19:26:32.427366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.318 [2024-07-12 19:26:32.427373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.318 [2024-07-12 19:26:32.427379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.318 [2024-07-12 19:26:32.427393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.318 qpair failed and we were unable to recover it. 
00:30:26.318 [2024-07-12 19:26:32.437338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.318 [2024-07-12 19:26:32.437451] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.318 [2024-07-12 19:26:32.437466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.318 [2024-07-12 19:26:32.437473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.318 [2024-07-12 19:26:32.437479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.318 [2024-07-12 19:26:32.437494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.318 qpair failed and we were unable to recover it. 00:30:26.579 [2024-07-12 19:26:32.447360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.579 [2024-07-12 19:26:32.447433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.579 [2024-07-12 19:26:32.447449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.579 [2024-07-12 19:26:32.447457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.579 [2024-07-12 19:26:32.447463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.447479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.457370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.457466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.457482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.457490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.457496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.457511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 
00:30:26.580 [2024-07-12 19:26:32.467391] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.467460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.467475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.467488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.467495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.467510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.477425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.477519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.477534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.477542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.477548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.477563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.487443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.487510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.487526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.487533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.487540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.487554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 
00:30:26.580 [2024-07-12 19:26:32.497549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.497698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.497713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.497721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.497727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.497741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.507509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.507610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.507626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.507633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.507639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.507655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.517519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.517590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.517606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.517613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.517619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.517633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 
00:30:26.580 [2024-07-12 19:26:32.527595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.527663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.527679] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.527686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.527692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.527707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.537631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.537700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.537715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.537723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.537729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.537744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.547498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.547575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.547590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.547598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.547605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.547619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 
00:30:26.580 [2024-07-12 19:26:32.557650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.557717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.557736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.557744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.557751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.557765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.567677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.567772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.567788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.567795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.567801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.567816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 00:30:26.580 [2024-07-12 19:26:32.577687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.577758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.577773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.577780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.577787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.577801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.580 qpair failed and we were unable to recover it. 
00:30:26.580 [2024-07-12 19:26:32.587609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.580 [2024-07-12 19:26:32.587688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.580 [2024-07-12 19:26:32.587703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.580 [2024-07-12 19:26:32.587710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.580 [2024-07-12 19:26:32.587717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.580 [2024-07-12 19:26:32.587732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.597795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.597911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.597926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.597934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.597940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.597958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.607807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.607890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.607916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.607924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.607931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.607951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-07-12 19:26:32.617796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.617873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.617898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.617908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.617914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.617934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.627788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.627863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.627880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.627887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.627894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.627910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.637855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.637924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.637939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.637946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.637953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.637969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-07-12 19:26:32.647873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.647943] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.647964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.647971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.647978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.647993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.657944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.658041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.658057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.658064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.658070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.658085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.668006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.668161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.668177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.668184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.668190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.668205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-07-12 19:26:32.677981] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.678043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.678057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.678065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.678071] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.678085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.687977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.688057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.688072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.688080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.688091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.688108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.581 [2024-07-12 19:26:32.698058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.698134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.698150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.698157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.698164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.698179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 
00:30:26.581 [2024-07-12 19:26:32.707983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.581 [2024-07-12 19:26:32.708062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.581 [2024-07-12 19:26:32.708078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.581 [2024-07-12 19:26:32.708085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.581 [2024-07-12 19:26:32.708091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.581 [2024-07-12 19:26:32.708107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.581 qpair failed and we were unable to recover it. 00:30:26.844 [2024-07-12 19:26:32.718056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.844 [2024-07-12 19:26:32.718146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.844 [2024-07-12 19:26:32.718163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.844 [2024-07-12 19:26:32.718170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.844 [2024-07-12 19:26:32.718176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.844 [2024-07-12 19:26:32.718191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.844 qpair failed and we were unable to recover it. 00:30:26.844 [2024-07-12 19:26:32.728210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.844 [2024-07-12 19:26:32.728306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.844 [2024-07-12 19:26:32.728321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.844 [2024-07-12 19:26:32.728329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.844 [2024-07-12 19:26:32.728335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.844 [2024-07-12 19:26:32.728350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.844 qpair failed and we were unable to recover it. 
00:30:26.844 [2024-07-12 19:26:32.738162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.844 [2024-07-12 19:26:32.738287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.844 [2024-07-12 19:26:32.738302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.844 [2024-07-12 19:26:32.738311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.844 [2024-07-12 19:26:32.738317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.844 [2024-07-12 19:26:32.738331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.844 qpair failed and we were unable to recover it. 00:30:26.844 [2024-07-12 19:26:32.748141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.844 [2024-07-12 19:26:32.748216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.844 [2024-07-12 19:26:32.748231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.844 [2024-07-12 19:26:32.748238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.844 [2024-07-12 19:26:32.748245] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.844 [2024-07-12 19:26:32.748260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.844 qpair failed and we were unable to recover it. 00:30:26.844 [2024-07-12 19:26:32.758061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.844 [2024-07-12 19:26:32.758135] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.844 [2024-07-12 19:26:32.758151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.844 [2024-07-12 19:26:32.758158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.844 [2024-07-12 19:26:32.758165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.844 [2024-07-12 19:26:32.758180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.844 qpair failed and we were unable to recover it. 
00:30:26.844 [2024-07-12 19:26:32.768205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.844 [2024-07-12 19:26:32.768272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.844 [2024-07-12 19:26:32.768287] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.844 [2024-07-12 19:26:32.768295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.768301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.768316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.778178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.778246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.778262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.778269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.778280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.778295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.788273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.788344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.788361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.788368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.788374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.788391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 
00:30:26.845 [2024-07-12 19:26:32.798313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.798381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.798397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.798405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.798411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.798426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.808351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.808419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.808435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.808442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.808448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.808463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.818348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.818418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.818434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.818441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.818448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.818463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 
00:30:26.845 [2024-07-12 19:26:32.828361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.828431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.828447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.828454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.828460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.828475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.838336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.838404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.838420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.838427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.838433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.838447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.848311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.848424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.848440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.848448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.848454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.848468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 
00:30:26.845 [2024-07-12 19:26:32.858384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.858452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.858468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.858475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.858481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.858496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.868499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.868597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.868613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.868624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.868630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.868645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.878503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.878572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.878591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.878599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.878606] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.878621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 
00:30:26.845 [2024-07-12 19:26:32.888564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.888658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.888675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.888683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.888689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.888704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.898550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.898620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.898635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.898642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.898648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.845 [2024-07-12 19:26:32.898663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.845 qpair failed and we were unable to recover it. 00:30:26.845 [2024-07-12 19:26:32.908533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.845 [2024-07-12 19:26:32.908608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.845 [2024-07-12 19:26:32.908623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.845 [2024-07-12 19:26:32.908630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.845 [2024-07-12 19:26:32.908637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.908652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 
00:30:26.846 [2024-07-12 19:26:32.918613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.846 [2024-07-12 19:26:32.918680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.846 [2024-07-12 19:26:32.918696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.846 [2024-07-12 19:26:32.918703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.846 [2024-07-12 19:26:32.918710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.918724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 00:30:26.846 [2024-07-12 19:26:32.928634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.846 [2024-07-12 19:26:32.928704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.846 [2024-07-12 19:26:32.928719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.846 [2024-07-12 19:26:32.928726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.846 [2024-07-12 19:26:32.928733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.928747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 00:30:26.846 [2024-07-12 19:26:32.938655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.846 [2024-07-12 19:26:32.938726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.846 [2024-07-12 19:26:32.938743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.846 [2024-07-12 19:26:32.938751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.846 [2024-07-12 19:26:32.938757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.938773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 
00:30:26.846 [2024-07-12 19:26:32.948693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.846 [2024-07-12 19:26:32.948775] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.846 [2024-07-12 19:26:32.948801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.846 [2024-07-12 19:26:32.948810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.846 [2024-07-12 19:26:32.948817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.948836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 00:30:26.846 [2024-07-12 19:26:32.958710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.846 [2024-07-12 19:26:32.958783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.846 [2024-07-12 19:26:32.958813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.846 [2024-07-12 19:26:32.958822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.846 [2024-07-12 19:26:32.958829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.958849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 00:30:26.846 [2024-07-12 19:26:32.968783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.846 [2024-07-12 19:26:32.968855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.846 [2024-07-12 19:26:32.968880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.846 [2024-07-12 19:26:32.968890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.846 [2024-07-12 19:26:32.968897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:26.846 [2024-07-12 19:26:32.968917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:26.846 qpair failed and we were unable to recover it. 
00:30:27.107 [2024-07-12 19:26:32.978783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.107 [2024-07-12 19:26:32.978900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.107 [2024-07-12 19:26:32.978917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.107 [2024-07-12 19:26:32.978925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.107 [2024-07-12 19:26:32.978931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.107 [2024-07-12 19:26:32.978947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.107 qpair failed and we were unable to recover it. 00:30:27.107 [2024-07-12 19:26:32.988823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.107 [2024-07-12 19:26:32.988933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.107 [2024-07-12 19:26:32.988949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.107 [2024-07-12 19:26:32.988956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.107 [2024-07-12 19:26:32.988962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.107 [2024-07-12 19:26:32.988977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.107 qpair failed and we were unable to recover it. 00:30:27.107 [2024-07-12 19:26:32.998839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.107 [2024-07-12 19:26:32.998905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.107 [2024-07-12 19:26:32.998920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.107 [2024-07-12 19:26:32.998927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.107 [2024-07-12 19:26:32.998934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.107 [2024-07-12 19:26:32.998954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.107 qpair failed and we were unable to recover it. 
00:30:27.107 [2024-07-12 19:26:33.008885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.107 [2024-07-12 19:26:33.008999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.107 [2024-07-12 19:26:33.009015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.107 [2024-07-12 19:26:33.009022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.107 [2024-07-12 19:26:33.009029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.107 [2024-07-12 19:26:33.009043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.018918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.018994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.019009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.019016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.019023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.019038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.028947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.029052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.029068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.029075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.029082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.029096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 
00:30:27.108 [2024-07-12 19:26:33.038940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.039007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.039022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.039030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.039036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.039051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.048946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.049012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.049032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.049039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.049045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.049061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.058977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.059048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.059063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.059070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.059076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.059092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 
00:30:27.108 [2024-07-12 19:26:33.069026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.069099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.069115] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.069127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.069134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.069149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.079003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.079073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.079088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.079095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.079103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.079117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.089109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.089183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.089198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.089206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.089219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.089234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 
00:30:27.108 [2024-07-12 19:26:33.099093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.099169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.099185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.099192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.099199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.099214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.109059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.109138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.109154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.109163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.109172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.109188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.119162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.119253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.119268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.119275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.119282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.119297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 
00:30:27.108 [2024-07-12 19:26:33.129170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.129247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.129262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.129269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.129276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.129290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.139255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.139329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.139345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.139352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.139358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.139373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 00:30:27.108 [2024-07-12 19:26:33.149245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.108 [2024-07-12 19:26:33.149317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.108 [2024-07-12 19:26:33.149333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.108 [2024-07-12 19:26:33.149340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.108 [2024-07-12 19:26:33.149346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.108 [2024-07-12 19:26:33.149362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.108 qpair failed and we were unable to recover it. 
00:30:27.108 [2024-07-12 19:26:33.159215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.159284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.159299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.159306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.159312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.159327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 00:30:27.109 [2024-07-12 19:26:33.169267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.169347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.169362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.169369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.169375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.169391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 00:30:27.109 [2024-07-12 19:26:33.179351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.179420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.179436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.179443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.179453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.179469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 
00:30:27.109 [2024-07-12 19:26:33.189379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.189471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.189487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.189494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.189500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.189515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 00:30:27.109 [2024-07-12 19:26:33.199358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.199447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.199464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.199471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.199478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.199492] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 00:30:27.109 [2024-07-12 19:26:33.209293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.209406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.209422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.209429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.209436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.209451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 
00:30:27.109 [2024-07-12 19:26:33.219453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.219521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.219537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.219544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.219550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.219565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 00:30:27.109 [2024-07-12 19:26:33.229444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.109 [2024-07-12 19:26:33.229519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.109 [2024-07-12 19:26:33.229535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.109 [2024-07-12 19:26:33.229542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.109 [2024-07-12 19:26:33.229549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.109 [2024-07-12 19:26:33.229564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.109 qpair failed and we were unable to recover it. 00:30:27.371 [2024-07-12 19:26:33.239458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.371 [2024-07-12 19:26:33.239562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.371 [2024-07-12 19:26:33.239577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.371 [2024-07-12 19:26:33.239585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.371 [2024-07-12 19:26:33.239591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.371 [2024-07-12 19:26:33.239607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.371 qpair failed and we were unable to recover it. 
00:30:27.371 [2024-07-12 19:26:33.249490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.371 [2024-07-12 19:26:33.249559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.371 [2024-07-12 19:26:33.249574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.371 [2024-07-12 19:26:33.249581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.371 [2024-07-12 19:26:33.249588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.371 [2024-07-12 19:26:33.249602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.371 qpair failed and we were unable to recover it. 00:30:27.371 [2024-07-12 19:26:33.259561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.371 [2024-07-12 19:26:33.259660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.371 [2024-07-12 19:26:33.259675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.371 [2024-07-12 19:26:33.259683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.371 [2024-07-12 19:26:33.259689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.371 [2024-07-12 19:26:33.259703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.371 qpair failed and we were unable to recover it. 00:30:27.371 [2024-07-12 19:26:33.269568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.371 [2024-07-12 19:26:33.269639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.371 [2024-07-12 19:26:33.269654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.371 [2024-07-12 19:26:33.269665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.371 [2024-07-12 19:26:33.269672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.371 [2024-07-12 19:26:33.269687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.371 qpair failed and we were unable to recover it. 
00:30:27.371 [2024-07-12 19:26:33.279600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.371 [2024-07-12 19:26:33.279669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.371 [2024-07-12 19:26:33.279684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.371 [2024-07-12 19:26:33.279691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.371 [2024-07-12 19:26:33.279698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.371 [2024-07-12 19:26:33.279713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.371 qpair failed and we were unable to recover it. 00:30:27.371 [2024-07-12 19:26:33.289555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.371 [2024-07-12 19:26:33.289616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.371 [2024-07-12 19:26:33.289630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.371 [2024-07-12 19:26:33.289637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.371 [2024-07-12 19:26:33.289644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.371 [2024-07-12 19:26:33.289658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.371 qpair failed and we were unable to recover it. 00:30:27.371 [2024-07-12 19:26:33.299523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.299591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.299606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.299613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.299620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.299635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 
00:30:27.372 [2024-07-12 19:26:33.309681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.309778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.309794] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.309801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.309807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.309822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.319699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.319768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.319784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.319791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.319797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.319812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.329599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.329671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.329686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.329693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.329700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.329715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 
00:30:27.372 [2024-07-12 19:26:33.339765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.339834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.339849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.339856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.339862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.339878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.349757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.349838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.349863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.349872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.349880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.349899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.359789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.359873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.359902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.359911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.359918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.359938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 
00:30:27.372 [2024-07-12 19:26:33.369809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.369891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.369916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.369925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.369931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.369951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.379848] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.379919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.379935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.379943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.379949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.379965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.389876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.389948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.389964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.389971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.389977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.389993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 
00:30:27.372 [2024-07-12 19:26:33.399957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.400024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.400040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.400047] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.400053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.400073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.410004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.410120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.410141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.410148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.410155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.410170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.419844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.419917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.419932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.419939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.419946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.419962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 
00:30:27.372 [2024-07-12 19:26:33.429976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.372 [2024-07-12 19:26:33.430049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.372 [2024-07-12 19:26:33.430065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.372 [2024-07-12 19:26:33.430072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.372 [2024-07-12 19:26:33.430078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.372 [2024-07-12 19:26:33.430093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.372 qpair failed and we were unable to recover it. 00:30:27.372 [2024-07-12 19:26:33.439999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.373 [2024-07-12 19:26:33.440072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.373 [2024-07-12 19:26:33.440088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.373 [2024-07-12 19:26:33.440095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.373 [2024-07-12 19:26:33.440103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.373 [2024-07-12 19:26:33.440119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.373 qpair failed and we were unable to recover it. 00:30:27.373 [2024-07-12 19:26:33.450036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.373 [2024-07-12 19:26:33.450107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.373 [2024-07-12 19:26:33.450131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.373 [2024-07-12 19:26:33.450139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.373 [2024-07-12 19:26:33.450146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.373 [2024-07-12 19:26:33.450160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.373 qpair failed and we were unable to recover it. 
00:30:27.373 [2024-07-12 19:26:33.460112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.373 [2024-07-12 19:26:33.460185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.373 [2024-07-12 19:26:33.460201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.373 [2024-07-12 19:26:33.460208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.373 [2024-07-12 19:26:33.460215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.373 [2024-07-12 19:26:33.460230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.373 qpair failed and we were unable to recover it. 00:30:27.373 [2024-07-12 19:26:33.470079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.373 [2024-07-12 19:26:33.470151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.373 [2024-07-12 19:26:33.470167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.373 [2024-07-12 19:26:33.470173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.373 [2024-07-12 19:26:33.470180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.373 [2024-07-12 19:26:33.470195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.373 qpair failed and we were unable to recover it. 00:30:27.373 [2024-07-12 19:26:33.480097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.373 [2024-07-12 19:26:33.480165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.373 [2024-07-12 19:26:33.480180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.373 [2024-07-12 19:26:33.480187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.373 [2024-07-12 19:26:33.480194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.373 [2024-07-12 19:26:33.480209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.373 qpair failed and we were unable to recover it. 
00:30:27.373 [2024-07-12 19:26:33.490093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.373 [2024-07-12 19:26:33.490172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.373 [2024-07-12 19:26:33.490187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.373 [2024-07-12 19:26:33.490196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.373 [2024-07-12 19:26:33.490202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.373 [2024-07-12 19:26:33.490220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.373 qpair failed and we were unable to recover it. 00:30:27.634 [2024-07-12 19:26:33.500075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.634 [2024-07-12 19:26:33.500232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.634 [2024-07-12 19:26:33.500248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.634 [2024-07-12 19:26:33.500255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.634 [2024-07-12 19:26:33.500262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.634 [2024-07-12 19:26:33.500276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.634 qpair failed and we were unable to recover it. 00:30:27.634 [2024-07-12 19:26:33.510226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.634 [2024-07-12 19:26:33.510332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.634 [2024-07-12 19:26:33.510348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.634 [2024-07-12 19:26:33.510355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.634 [2024-07-12 19:26:33.510361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.634 [2024-07-12 19:26:33.510376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.634 qpair failed and we were unable to recover it. 
00:30:27.634 [2024-07-12 19:26:33.520220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.634 [2024-07-12 19:26:33.520293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.634 [2024-07-12 19:26:33.520309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.634 [2024-07-12 19:26:33.520316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.634 [2024-07-12 19:26:33.520323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.634 [2024-07-12 19:26:33.520339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.634 qpair failed and we were unable to recover it. 00:30:27.634 [2024-07-12 19:26:33.530233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.634 [2024-07-12 19:26:33.530310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.634 [2024-07-12 19:26:33.530326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.634 [2024-07-12 19:26:33.530333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.634 [2024-07-12 19:26:33.530340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.634 [2024-07-12 19:26:33.530355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.634 qpair failed and we were unable to recover it. 00:30:27.634 [2024-07-12 19:26:33.540278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.634 [2024-07-12 19:26:33.540353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.634 [2024-07-12 19:26:33.540368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.634 [2024-07-12 19:26:33.540375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.634 [2024-07-12 19:26:33.540382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.634 [2024-07-12 19:26:33.540397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.634 qpair failed and we were unable to recover it. 
00:30:27.634 [2024-07-12 19:26:33.550187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.634 [2024-07-12 19:26:33.550259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.634 [2024-07-12 19:26:33.550275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.634 [2024-07-12 19:26:33.550282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.634 [2024-07-12 19:26:33.550289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.634 [2024-07-12 19:26:33.550304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.634 qpair failed and we were unable to recover it. 00:30:27.634 [2024-07-12 19:26:33.560390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.560467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.560482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.560490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.560497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.560513] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.570359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.570437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.570453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.570460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.570467] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.570481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 
00:30:27.635 [2024-07-12 19:26:33.580401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.580467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.580482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.580489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.580500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.580515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.590421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.590497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.590512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.590519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.590527] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.590541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.600427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.600531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.600547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.600554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.600561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.600576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 
00:30:27.635 [2024-07-12 19:26:33.610448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.610514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.610530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.610537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.610543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.610558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.620470] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.620549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.620565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.620573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.620580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.620595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.630545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.630635] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.630650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.630658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.630664] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.630678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 
00:30:27.635 [2024-07-12 19:26:33.640531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.640600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.640615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.640622] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.640629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.640644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.650662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.650731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.650747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.650754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.650760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.650775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.660589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.660660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.660675] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.660682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.660688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.660704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 
00:30:27.635 [2024-07-12 19:26:33.670627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.670699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.670715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.670726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.670733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.670747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.680661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.680727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.680743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.680751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.680757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.680773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.690706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.690782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.690807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.690816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.690823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.690843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 
00:30:27.635 [2024-07-12 19:26:33.700702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.700822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.700838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.700846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.700852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.700868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.710716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.710789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.710804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.710811] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.710818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.710833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 00:30:27.635 [2024-07-12 19:26:33.720757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.720828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.635 [2024-07-12 19:26:33.720853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.635 [2024-07-12 19:26:33.720863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.635 [2024-07-12 19:26:33.720870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.635 [2024-07-12 19:26:33.720890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.635 qpair failed and we were unable to recover it. 
00:30:27.635 [2024-07-12 19:26:33.730769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.635 [2024-07-12 19:26:33.730836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.636 [2024-07-12 19:26:33.730852] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.636 [2024-07-12 19:26:33.730859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.636 [2024-07-12 19:26:33.730866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.636 [2024-07-12 19:26:33.730882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.636 qpair failed and we were unable to recover it. 00:30:27.636 [2024-07-12 19:26:33.740902] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.636 [2024-07-12 19:26:33.740973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.636 [2024-07-12 19:26:33.740989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.636 [2024-07-12 19:26:33.740996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.636 [2024-07-12 19:26:33.741003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.636 [2024-07-12 19:26:33.741018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.636 qpair failed and we were unable to recover it. 00:30:27.636 [2024-07-12 19:26:33.750849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.636 [2024-07-12 19:26:33.750937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.636 [2024-07-12 19:26:33.750962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.636 [2024-07-12 19:26:33.750970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.636 [2024-07-12 19:26:33.750977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.636 [2024-07-12 19:26:33.750998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.636 qpair failed and we were unable to recover it. 
00:30:27.636 [2024-07-12 19:26:33.760859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.636 [2024-07-12 19:26:33.760961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.636 [2024-07-12 19:26:33.760978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.636 [2024-07-12 19:26:33.760991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.636 [2024-07-12 19:26:33.760998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.636 [2024-07-12 19:26:33.761014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.636 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.770889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.770961] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.770977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.770984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.770991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.771006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.780913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.780980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.780996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.781003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.781009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.781024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-07-12 19:26:33.790927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.790996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.791011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.791018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.791024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.791039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.800979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.801045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.801060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.801067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.801073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.801088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.811067] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.811179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.811196] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.811203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.811209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.811224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-07-12 19:26:33.821020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.821130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.821147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.821154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.821160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.821176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.831035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.831108] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.831129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.831136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.831143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.831157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.841069] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.841145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.841161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.841169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.841176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.841190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-07-12 19:26:33.851193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.851262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.851281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.851288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.851295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.851310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.861148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.861219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.861234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.861241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.861248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.861263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 00:30:27.897 [2024-07-12 19:26:33.871146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.871220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.871235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.871243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.897 [2024-07-12 19:26:33.871250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.897 [2024-07-12 19:26:33.871265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.897 qpair failed and we were unable to recover it. 
00:30:27.897 [2024-07-12 19:26:33.881170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.897 [2024-07-12 19:26:33.881227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.897 [2024-07-12 19:26:33.881243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.897 [2024-07-12 19:26:33.881250] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.881256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.881270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.891200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.891265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.891280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.891288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.891294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.891316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.901220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.901290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.901305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.901312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.901318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.901334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 
00:30:27.898 [2024-07-12 19:26:33.911266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.911341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.911357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.911364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.911372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.911386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.921279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.921348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.921364] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.921372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.921379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.921393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.931239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.931308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.931323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.931330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.931337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.931351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 
00:30:27.898 [2024-07-12 19:26:33.941359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.941431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.941450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.941458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.941465] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.941479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.951342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.951419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.951434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.951441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.951448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.951462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.961381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.961449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.961464] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.961471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.961478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.961493] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 
00:30:27.898 [2024-07-12 19:26:33.971406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.971474] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.971489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.971496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.971502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.971518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.981428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.981499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.981515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.981522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.981532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.981547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:33.991429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:33.991505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:33.991520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:33.991528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:33.991534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:33.991549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 
00:30:27.898 [2024-07-12 19:26:34.001525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:34.001610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:34.001626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:34.001633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:34.001639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:34.001654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:34.011414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:34.011477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:34.011493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:34.011501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:34.011507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:34.011521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 00:30:27.898 [2024-07-12 19:26:34.021515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.898 [2024-07-12 19:26:34.021628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.898 [2024-07-12 19:26:34.021644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.898 [2024-07-12 19:26:34.021651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.898 [2024-07-12 19:26:34.021657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:27.898 [2024-07-12 19:26:34.021672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:27.898 qpair failed and we were unable to recover it. 
00:30:28.159 [2024-07-12 19:26:34.031556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.031634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.031650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.031657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.031663] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.031678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 00:30:28.159 [2024-07-12 19:26:34.041583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.041650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.041665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.041672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.041678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.041692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 00:30:28.159 [2024-07-12 19:26:34.051603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.051671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.051687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.051694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.051700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.051715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 
00:30:28.159 [2024-07-12 19:26:34.061522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.061590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.061605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.061613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.061619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.061633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 00:30:28.159 [2024-07-12 19:26:34.071669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.071742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.071757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.071769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.071775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.071790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 00:30:28.159 [2024-07-12 19:26:34.081673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.081739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.081755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.081762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.081769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.081783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 
00:30:28.159 [2024-07-12 19:26:34.091705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.091782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.091807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.159 [2024-07-12 19:26:34.091816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.159 [2024-07-12 19:26:34.091823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.159 [2024-07-12 19:26:34.091842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.159 qpair failed and we were unable to recover it. 00:30:28.159 [2024-07-12 19:26:34.101762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.159 [2024-07-12 19:26:34.101838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.159 [2024-07-12 19:26:34.101863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.101871] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.101878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.101898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.111677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.111758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.111783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.111792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.111798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.111818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 
00:30:28.160 [2024-07-12 19:26:34.121792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.121862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.121879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.121886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.121892] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.121908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.131839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.131904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.131920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.131927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.131933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.131948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.141867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.141941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.141966] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.141975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.141981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.142000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 
00:30:28.160 [2024-07-12 19:26:34.151907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.152023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.152040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.152048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.152054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.152070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.161893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.161964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.161983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.161995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.162001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.162017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.171918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.171997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.172013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.172020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.172027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.172042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 
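On the host side, this test is driven by the target_disconnect helper; an equivalent stand-alone attempt against the same transport ID can be made with the SPDK perf example. The binary path and I/O parameters below are assumptions (they vary by SPDK version and workload); only the -r transport string mirrors the fields in the log.

# Illustrative SPDK host-side I/O attempt against the subsystem shown above.
# While the target has dropped the controller, this fails the same way:
# CONNECT completes with sct 1 / sc 130, then CQ transport error -6 on the qpair.
./build/examples/perf -q 32 -o 4096 -w randrw -M 50 -t 10 \
    -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'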
00:30:28.160 [2024-07-12 19:26:34.181948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.182019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.182035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.182042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.182048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9bfc000b90 00:30:28.160 [2024-07-12 19:26:34.182062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Write completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error 
(sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 [2024-07-12 19:26:34.182941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:28.160 [2024-07-12 19:26:34.192036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.192118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.192153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.192164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.192171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1aae220 00:30:28.160 [2024-07-12 19:26:34.192191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.201999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.160 [2024-07-12 19:26:34.202077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.160 [2024-07-12 19:26:34.202102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.160 [2024-07-12 19:26:34.202111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.160 [2024-07-12 19:26:34.202118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1aae220 00:30:28.160 [2024-07-12 19:26:34.202141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.160 qpair failed and we were unable to recover it. 00:30:28.160 [2024-07-12 19:26:34.202304] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:28.160 A controller has encountered a failure and is being reset. 00:30:28.160 [2024-07-12 19:26:34.202418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abbf20 (9): Bad file descriptor 00:30:28.160 Controller properly reset. 
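The "Submitting Keep Alive failed" message and the following "Controller properly reset." are the intended behaviour of the disconnect test: the target connection is dropped under live I/O, the outstanding commands complete with transport errors, and the host driver resets and re-attaches the controller once the target is reachable again. One way to provoke the same sequence by hand is to toggle the listener with stock SPDK RPCs; this is a sketch, not necessarily what target_disconnect.sh itself does, and the 5 s gap is an arbitrary choice (addresses reused from the log).

# Force a host-visible disconnect and recovery by toggling the listener while I/O runs.
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
sleep 5
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420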
00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.160 starting I/O failed 00:30:28.160 Read completed with error (sct=0, sc=8) 00:30:28.161 starting I/O failed 00:30:28.161 Read completed with error (sct=0, sc=8) 00:30:28.161 starting I/O failed 00:30:28.161 Read completed with error (sct=0, sc=8) 00:30:28.161 starting I/O failed 00:30:28.161 Write completed with error (sct=0, sc=8) 00:30:28.161 starting I/O failed 00:30:28.161 Write completed with error (sct=0, sc=8) 00:30:28.161 starting I/O failed 00:30:28.161 Write completed with error (sct=0, sc=8) 00:30:28.161 starting I/O failed 00:30:28.161 [2024-07-12 19:26:34.257620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:28.161 Initializing NVMe Controllers 00:30:28.161 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:28.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:28.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:28.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:28.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:28.161 
Initialization complete. Launching workers. 00:30:28.161 Starting thread on core 1 00:30:28.161 Starting thread on core 2 00:30:28.161 Starting thread on core 3 00:30:28.161 Starting thread on core 0 00:30:28.161 19:26:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:28.161 00:30:28.161 real 0m11.365s 00:30:28.161 user 0m20.946s 00:30:28.161 sys 0m4.250s 00:30:28.161 19:26:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.161 19:26:34 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:28.161 ************************************ 00:30:28.161 END TEST nvmf_target_disconnect_tc2 00:30:28.161 ************************************ 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:28.421 rmmod nvme_tcp 00:30:28.421 rmmod nvme_fabrics 00:30:28.421 rmmod nvme_keyring 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:28.421 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1620758 ']' 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1620758 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1620758 ']' 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1620758 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1620758 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1620758' 00:30:28.422 killing process with pid 1620758 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1620758 00:30:28.422 19:26:34 nvmf_tcp.nvmf_target_disconnect -- 
common/autotest_common.sh@972 -- # wait 1620758 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:28.683 19:26:34 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:30.602 19:26:36 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:30.602 00:30:30.602 real 0m20.981s 00:30:30.602 user 0m48.384s 00:30:30.602 sys 0m9.833s 00:30:30.602 19:26:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:30.602 19:26:36 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:30.602 ************************************ 00:30:30.602 END TEST nvmf_target_disconnect 00:30:30.602 ************************************ 00:30:30.602 19:26:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:30.602 19:26:36 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:30.602 19:26:36 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:30.602 19:26:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.864 19:26:36 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:30.864 00:30:30.864 real 22m35.793s 00:30:30.864 user 47m3.733s 00:30:30.864 sys 7m8.155s 00:30:30.864 19:26:36 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:30.864 19:26:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.864 ************************************ 00:30:30.864 END TEST nvmf_tcp 00:30:30.864 ************************************ 00:30:30.864 19:26:36 -- common/autotest_common.sh@1142 -- # return 0 00:30:30.864 19:26:36 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:30:30.864 19:26:36 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:30.864 19:26:36 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:30.864 19:26:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:30.864 19:26:36 -- common/autotest_common.sh@10 -- # set +x 00:30:30.864 ************************************ 00:30:30.864 START TEST spdkcli_nvmf_tcp 00:30:30.865 ************************************ 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:30.865 * Looking for test storage... 
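The teardown above (rmmod of nvme_tcp/nvme_fabrics/nvme_keyring, killing the target process, removing the SPDK network namespace and flushing the test address) is performed by nvmftestfini. A rough manual equivalent on a development host, assuming the kernel modules were loaded only for the test and nothing else is using NVMe-oF, might look like the sketch below.

# Manual cleanup sketch (only safe if no other NVMe-oF users are active).
nvme disconnect-all                     # drop any remaining kernel-host connections
modprobe -r nvme_tcp nvme_fabrics       # unload the transport modules
pkill -f nvmf_tgt || true               # stop a leftover SPDK target, if any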
00:30:30.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1622653 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1622653 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1622653 ']' 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:30.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:30.865 19:26:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.126 [2024-07-12 19:26:37.041422] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:30:31.126 [2024-07-12 19:26:37.041494] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1622653 ] 00:30:31.126 EAL: No free 2048 kB hugepages reported on node 1 00:30:31.126 [2024-07-12 19:26:37.102873] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:31.126 [2024-07-12 19:26:37.170042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:31.126 [2024-07-12 19:26:37.170043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:31.698 19:26:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:31.698 19:26:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:31.698 19:26:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:31.698 19:26:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:31.698 19:26:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.959 19:26:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:31.959 19:26:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:31.959 19:26:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:31.959 19:26:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:31.959 19:26:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:31.959 19:26:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:31.959 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:31.959 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:31.959 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:31.959 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:31.959 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:31.959 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:31.959 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:31.959 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:31.959 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:31.959 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:31.959 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:31.960 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:31.960 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:31.960 ' 00:30:34.506 [2024-07-12 19:26:40.428895] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.889 [2024-07-12 19:26:41.725130] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:38.433 [2024-07-12 19:26:44.136233] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:40.345 [2024-07-12 19:26:46.214487] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:41.757 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:41.757 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:41.757 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:41.757 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:41.757 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:41.757 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:41.757 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:41.757 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:41.757 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:41.757 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:41.757 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:41.757 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:41.757 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:42.017 19:26:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:42.276 19:26:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:42.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:42.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:42.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:42.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:42.276 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:42.276 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:42.276 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:42.276 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:42.276 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:42.276 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:42.276 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:42.276 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:42.276 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:42.276 ' 00:30:47.630 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:47.630 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:47.630 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:47.630 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:47.630 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:47.630 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:47.630 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:47.630 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:47.630 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:47.630 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:47.630 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:47.630 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:47.630 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:47.630 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:47.630 19:26:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:47.630 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:47.630 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.630 19:26:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1622653 00:30:47.630 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1622653 ']' 00:30:47.630 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1622653 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1622653 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1622653' 00:30:47.891 killing process with pid 1622653 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1622653 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1622653 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1622653 ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1622653 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1622653 ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1622653 00:30:47.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1622653) - No such process 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1622653 is not found' 00:30:47.891 Process with pid 1622653 is not found 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:47.891 00:30:47.891 real 0m17.109s 00:30:47.891 user 0m37.386s 00:30:47.891 sys 0m0.871s 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:47.891 19:26:53 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:47.891 ************************************ 00:30:47.891 END TEST spdkcli_nvmf_tcp 00:30:47.891 ************************************ 00:30:47.891 19:26:53 -- common/autotest_common.sh@1142 -- # return 0 00:30:47.891 19:26:53 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:47.891 19:26:53 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:47.891 19:26:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:47.891 19:26:53 -- common/autotest_common.sh@10 -- # set +x 00:30:48.152 ************************************ 00:30:48.152 START TEST nvmf_identify_passthru 00:30:48.152 ************************************ 00:30:48.152 19:26:54 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:48.152 * Looking for test storage... 00:30:48.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:48.152 19:26:54 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.152 19:26:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.152 19:26:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.152 19:26:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.152 19:26:54 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.152 19:26:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.152 19:26:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.152 19:26:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:48.152 19:26:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:48.152 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:48.152 19:26:54 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:48.152 19:26:54 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:48.152 19:26:54 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:48.152 19:26:54 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:48.152 19:26:54 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.153 19:26:54 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.153 19:26:54 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.153 19:26:54 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:48.153 19:26:54 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:48.153 19:26:54 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.153 19:26:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:48.153 19:26:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:48.153 19:26:54 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:48.153 19:26:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:56.299 19:27:00 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:30:56.299 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:30:56.299 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:30:56.299 Found net devices under 0000:4b:00.0: cvl_0_0 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:30:56.299 Found net devices under 0000:4b:00.1: cvl_0_1 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.299 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:56.300 19:27:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:56.300 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:56.300 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.468 ms 00:30:56.300 00:30:56.300 --- 10.0.0.2 ping statistics --- 00:30:56.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.300 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:56.300 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.300 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:30:56.300 00:30:56.300 --- 10.0.0.1 ping statistics --- 00:30:56.300 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.300 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:56.300 19:27:01 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:56.300 19:27:01 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:56.300 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.300 
19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605487 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:56.300 19:27:01 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:56.300 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1629859 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:56.300 19:27:02 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1629859 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1629859 ']' 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:56.300 19:27:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:56.300 [2024-07-12 19:27:02.422463] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:30:56.300 [2024-07-12 19:27:02.422514] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.561 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.561 [2024-07-12 19:27:02.488145] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:56.561 [2024-07-12 19:27:02.555813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.561 [2024-07-12 19:27:02.555850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:56.561 [2024-07-12 19:27:02.555858] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.561 [2024-07-12 19:27:02.555865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.561 [2024-07-12 19:27:02.555870] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:56.561 [2024-07-12 19:27:02.556004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.561 [2024-07-12 19:27:02.556120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.561 [2024-07-12 19:27:02.556276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.561 [2024-07-12 19:27:02.556396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:30:57.131 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.131 INFO: Log level set to 20 00:30:57.131 INFO: Requests: 00:30:57.131 { 00:30:57.131 "jsonrpc": "2.0", 00:30:57.131 "method": "nvmf_set_config", 00:30:57.131 "id": 1, 00:30:57.131 "params": { 00:30:57.131 "admin_cmd_passthru": { 00:30:57.131 "identify_ctrlr": true 00:30:57.131 } 00:30:57.131 } 00:30:57.131 } 00:30:57.131 00:30:57.131 INFO: response: 00:30:57.131 { 00:30:57.131 "jsonrpc": "2.0", 00:30:57.131 "id": 1, 00:30:57.131 "result": true 00:30:57.131 } 00:30:57.131 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.131 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.131 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.131 INFO: Setting log level to 20 00:30:57.131 INFO: Setting log level to 20 00:30:57.131 INFO: Log level set to 20 00:30:57.131 INFO: Log level set to 20 00:30:57.131 INFO: Requests: 00:30:57.131 { 00:30:57.131 "jsonrpc": "2.0", 00:30:57.131 "method": "framework_start_init", 00:30:57.131 "id": 1 00:30:57.131 } 00:30:57.131 00:30:57.131 INFO: Requests: 00:30:57.131 { 00:30:57.131 "jsonrpc": "2.0", 00:30:57.131 "method": "framework_start_init", 00:30:57.131 "id": 1 00:30:57.131 } 00:30:57.131 00:30:57.391 [2024-07-12 19:27:03.281547] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:57.391 INFO: response: 00:30:57.391 { 00:30:57.391 "jsonrpc": "2.0", 00:30:57.391 "id": 1, 00:30:57.391 "result": true 00:30:57.391 } 00:30:57.391 00:30:57.391 INFO: response: 00:30:57.391 { 00:30:57.391 "jsonrpc": "2.0", 00:30:57.391 "id": 1, 00:30:57.391 "result": true 00:30:57.391 } 00:30:57.391 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.391 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.391 19:27:03 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:30:57.391 INFO: Setting log level to 40 00:30:57.391 INFO: Setting log level to 40 00:30:57.391 INFO: Setting log level to 40 00:30:57.391 [2024-07-12 19:27:03.294870] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.391 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.391 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.391 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 Nvme0n1 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 [2024-07-12 19:27:03.677435] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.653 [ 00:30:57.653 { 00:30:57.653 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:57.653 "subtype": "Discovery", 00:30:57.653 "listen_addresses": [], 00:30:57.653 "allow_any_host": true, 00:30:57.653 "hosts": [] 00:30:57.653 }, 00:30:57.653 { 00:30:57.653 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:57.653 "subtype": "NVMe", 00:30:57.653 "listen_addresses": [ 00:30:57.653 { 00:30:57.653 "trtype": "TCP", 00:30:57.653 "adrfam": "IPv4", 00:30:57.653 "traddr": "10.0.0.2", 00:30:57.653 "trsvcid": "4420" 00:30:57.653 } 00:30:57.653 ], 00:30:57.653 "allow_any_host": true, 00:30:57.653 "hosts": [], 00:30:57.653 "serial_number": 
"SPDK00000000000001", 00:30:57.653 "model_number": "SPDK bdev Controller", 00:30:57.653 "max_namespaces": 1, 00:30:57.653 "min_cntlid": 1, 00:30:57.653 "max_cntlid": 65519, 00:30:57.653 "namespaces": [ 00:30:57.653 { 00:30:57.653 "nsid": 1, 00:30:57.653 "bdev_name": "Nvme0n1", 00:30:57.653 "name": "Nvme0n1", 00:30:57.653 "nguid": "36344730526054870025384500000044", 00:30:57.653 "uuid": "36344730-5260-5487-0025-384500000044" 00:30:57.653 } 00:30:57.653 ] 00:30:57.653 } 00:30:57.653 ] 00:30:57.653 19:27:03 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:57.653 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:57.653 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.913 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605487 00:30:57.913 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:57.913 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:57.913 19:27:03 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:57.913 EAL: No free 2048 kB hugepages reported on node 1 00:30:57.913 19:27:04 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:30:57.913 19:27:04 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605487 '!=' S64GNE0R605487 ']' 00:30:57.913 19:27:04 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:30:57.913 19:27:04 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:57.913 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:57.913 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:57.913 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:57.913 19:27:04 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:57.913 19:27:04 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:57.913 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:57.913 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:30:57.913 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:57.913 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:30:57.913 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:57.913 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:57.913 rmmod nvme_tcp 00:30:58.174 rmmod nvme_fabrics 00:30:58.174 rmmod nvme_keyring 00:30:58.174 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:58.174 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:30:58.174 19:27:04 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:30:58.174 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1629859 ']' 00:30:58.174 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1629859 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1629859 ']' 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1629859 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1629859 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1629859' 00:30:58.174 killing process with pid 1629859 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1629859 00:30:58.174 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1629859 00:30:58.436 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:58.436 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:58.436 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:58.436 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:58.436 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:58.436 19:27:04 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:58.436 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:58.436 19:27:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.352 19:27:06 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:00.613 00:31:00.613 real 0m12.460s 00:31:00.613 user 0m9.742s 00:31:00.613 sys 0m5.983s 00:31:00.613 19:27:06 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:00.613 19:27:06 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:00.613 ************************************ 00:31:00.613 END TEST nvmf_identify_passthru 00:31:00.613 ************************************ 00:31:00.613 19:27:06 -- common/autotest_common.sh@1142 -- # return 0 00:31:00.614 19:27:06 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:00.614 19:27:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:00.614 19:27:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:00.614 19:27:06 -- common/autotest_common.sh@10 -- # set +x 00:31:00.614 ************************************ 00:31:00.614 START TEST nvmf_dif 00:31:00.614 ************************************ 00:31:00.614 19:27:06 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:00.614 * Looking for test storage... 
00:31:00.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:00.614 19:27:06 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:00.614 19:27:06 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:00.614 19:27:06 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:00.614 19:27:06 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:00.614 19:27:06 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.614 19:27:06 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.614 19:27:06 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.614 19:27:06 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:00.614 19:27:06 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:00.614 19:27:06 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:00.614 19:27:06 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:00.614 19:27:06 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:00.614 19:27:06 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:00.614 19:27:06 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:00.614 19:27:06 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:00.614 19:27:06 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:00.614 19:27:06 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:00.614 19:27:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:31:08.760 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:31:08.760 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
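The gather_supported_nvmf_pci_devs trace above is essentially a lookup of known vendor:device IDs followed by a read of the netdev names under each PCI function in sysfs. A standalone sketch for the E810 parts (8086:159b) found on this host; the in-tree nvmf/common.sh also matches x722 (0x37d2) and Mellanox IDs via its cached pci_bus_cache map and additionally checks link state:

    # sketch only -- enumerate net devices backed by Intel E810 (0x8086:0x159b)
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(cat "$pci/vendor") device=$(cat "$pci/device")
        [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ] || continue
        echo "Found ${pci##*/} ($vendor - $device)"
        for net in "$pci"/net/*; do                     # the kernel exposes the netdev name(s) here
            [ -e "$net" ] && net_devs+=("${net##*/}")   # e.g. cvl_0_0, cvl_0_1
        done
    done
    printf 'net_devs: %s\n' "${net_devs[@]}"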
00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:31:08.760 Found net devices under 0000:4b:00.0: cvl_0_0 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:31:08.760 Found net devices under 0000:4b:00.1: cvl_0_1 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:08.760 19:27:13 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:08.761 19:27:13 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:08.761 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:08.761 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:31:08.761 00:31:08.761 --- 10.0.0.2 ping statistics --- 00:31:08.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.761 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:08.761 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:08.761 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.279 ms 00:31:08.761 00:31:08.761 --- 10.0.0.1 ping statistics --- 00:31:08.761 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:08.761 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:08.761 19:27:13 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:10.698 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:10.698 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:10.698 19:27:16 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:10.698 19:27:16 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1636159 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1636159 00:31:10.698 19:27:16 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1636159 ']' 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:10.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:10.698 19:27:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:10.698 [2024-07-12 19:27:16.803963] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:31:10.698 [2024-07-12 19:27:16.804013] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:10.959 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.959 [2024-07-12 19:27:16.870051] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.959 [2024-07-12 19:27:16.937128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:10.959 [2024-07-12 19:27:16.937164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:10.959 [2024-07-12 19:27:16.937171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:10.959 [2024-07-12 19:27:16.937177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:10.959 [2024-07-12 19:27:16.937183] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
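Before the target was launched, nvmftestinit split the two E810 ports so that the target side lives in its own network namespace and the initiator reaches it over the physical link, exactly as traced above. A condensed sketch of that setup and of starting nvmf_tgt inside the namespace, using the same names, addresses and flags that appear in the trace (workspace path shortened):

    # sketch only -- namespace split used by the phy NVMe/TCP tests
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                       # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator keeps the second port
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                    # initiator -> target reachability
    ip netns exec "$NS" ping -c 1 10.0.0.1                # target -> initiator

    # the target application is then run inside the namespace
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &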
00:31:10.959 [2024-07-12 19:27:16.937201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:11.532 19:27:17 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:11.532 19:27:17 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:11.532 19:27:17 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:11.532 19:27:17 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:11.532 [2024-07-12 19:27:17.607566] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.532 19:27:17 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:11.532 19:27:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:11.532 ************************************ 00:31:11.532 START TEST fio_dif_1_default 00:31:11.532 ************************************ 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.532 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.532 bdev_null0 00:31:11.533 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.533 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:11.533 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.533 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:11.794 [2024-07-12 19:27:17.691906] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:11.794 { 00:31:11.794 "params": { 00:31:11.794 "name": "Nvme$subsystem", 00:31:11.794 "trtype": "$TEST_TRANSPORT", 00:31:11.794 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:11.794 "adrfam": "ipv4", 00:31:11.794 "trsvcid": "$NVMF_PORT", 00:31:11.794 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:11.794 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:11.794 "hdgst": ${hdgst:-false}, 00:31:11.794 "ddgst": ${ddgst:-false} 00:31:11.794 }, 00:31:11.794 "method": "bdev_nvme_attach_controller" 00:31:11.794 } 00:31:11.794 EOF 00:31:11.794 )") 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:11.794 "params": { 00:31:11.794 "name": "Nvme0", 00:31:11.794 "trtype": "tcp", 00:31:11.794 "traddr": "10.0.0.2", 00:31:11.794 "adrfam": "ipv4", 00:31:11.794 "trsvcid": "4420", 00:31:11.794 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:11.794 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:11.794 "hdgst": false, 00:31:11.794 "ddgst": false 00:31:11.794 }, 00:31:11.794 "method": "bdev_nvme_attach_controller" 00:31:11.794 }' 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:11.794 19:27:17 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:12.056 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:12.056 fio-3.35 00:31:12.056 Starting 1 thread 00:31:12.056 EAL: No free 2048 kB hugepages reported on node 1 00:31:24.287 00:31:24.287 filename0: (groupid=0, jobs=1): err= 0: pid=1636693: Fri Jul 12 19:27:28 2024 00:31:24.287 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10018msec) 00:31:24.287 slat (nsec): min=5402, max=32020, avg=6104.90, stdev=1338.43 00:31:24.287 clat (usec): min=1108, max=42854, avg=21573.65, stdev=20199.86 00:31:24.287 lat (usec): min=1114, max=42886, avg=21579.76, stdev=20199.86 00:31:24.287 clat percentiles (usec): 00:31:24.287 | 1.00th=[ 1188], 5.00th=[ 1254], 10.00th=[ 1270], 20.00th=[ 1287], 00:31:24.287 | 30.00th=[ 1303], 40.00th=[ 1319], 50.00th=[41681], 60.00th=[41681], 00:31:24.287 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:24.287 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:31:24.287 | 99.99th=[42730] 00:31:24.287 bw ( KiB/s): min= 672, max= 768, per=99.86%, avg=740.80, stdev=34.86, samples=20 00:31:24.287 iops : min= 168, max= 
192, avg=185.20, stdev= 8.72, samples=20 00:31:24.287 lat (msec) : 2=49.78%, 50=50.22% 00:31:24.287 cpu : usr=95.78%, sys=4.03%, ctx=12, majf=0, minf=240 00:31:24.287 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.287 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.287 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:24.287 00:31:24.287 Run status group 0 (all jobs): 00:31:24.287 READ: bw=741KiB/s (759kB/s), 741KiB/s-741KiB/s (759kB/s-759kB/s), io=7424KiB (7602kB), run=10018-10018msec 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.287 00:31:24.287 real 0m11.235s 00:31:24.287 user 0m22.667s 00:31:24.287 sys 0m0.723s 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 ************************************ 00:31:24.287 END TEST fio_dif_1_default 00:31:24.287 ************************************ 00:31:24.287 19:27:28 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:24.287 19:27:28 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:24.287 19:27:28 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:24.287 19:27:28 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 ************************************ 00:31:24.287 START TEST fio_dif_1_multi_subsystems 00:31:24.287 ************************************ 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub 
in "$@" 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 bdev_null0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.287 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.288 [2024-07-12 19:27:28.985329] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.288 19:27:28 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.288 bdev_null1 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.288 19:27:29 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:24.288 { 00:31:24.288 "params": { 00:31:24.288 "name": "Nvme$subsystem", 00:31:24.288 "trtype": "$TEST_TRANSPORT", 00:31:24.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.288 "adrfam": "ipv4", 00:31:24.288 "trsvcid": "$NVMF_PORT", 00:31:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.288 "hdgst": ${hdgst:-false}, 00:31:24.288 "ddgst": ${ddgst:-false} 00:31:24.288 }, 00:31:24.288 "method": "bdev_nvme_attach_controller" 00:31:24.288 } 00:31:24.288 EOF 00:31:24.288 )") 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:24.288 { 00:31:24.288 "params": { 00:31:24.288 "name": "Nvme$subsystem", 00:31:24.288 "trtype": "$TEST_TRANSPORT", 00:31:24.288 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:24.288 "adrfam": "ipv4", 00:31:24.288 "trsvcid": "$NVMF_PORT", 00:31:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:24.288 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:24.288 "hdgst": ${hdgst:-false}, 00:31:24.288 "ddgst": ${ddgst:-false} 00:31:24.288 }, 00:31:24.288 "method": "bdev_nvme_attach_controller" 00:31:24.288 } 00:31:24.288 EOF 00:31:24.288 )") 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
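The create_subsystems calls traced above for this multi-subsystem case boil down to a short RPC sequence against the running target: one DIF-type-1 null bdev per subsystem, exported through the TCP listener on 10.0.0.2:4420. Assuming rpc_cmd wraps SPDK's scripts/rpc.py (talking to the default /var/tmp/spdk.sock), a rough equivalent is:

    # sketch only -- mirrors the rpc_cmd calls traced above
    rpc=./scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip    # done once by create_transport earlier
    for i in 0 1; do
        # 512-byte blocks with 16 bytes of metadata, DIF type 1
        $rpc bdev_null_create "bdev_null$i" 64 512 --md-size 16 --dif-type 1
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            --serial-number "53313233-$i" --allow-any-host
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "bdev_null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done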
00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:24.288 "params": { 00:31:24.288 "name": "Nvme0", 00:31:24.288 "trtype": "tcp", 00:31:24.288 "traddr": "10.0.0.2", 00:31:24.288 "adrfam": "ipv4", 00:31:24.288 "trsvcid": "4420", 00:31:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:24.288 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:24.288 "hdgst": false, 00:31:24.288 "ddgst": false 00:31:24.288 }, 00:31:24.288 "method": "bdev_nvme_attach_controller" 00:31:24.288 },{ 00:31:24.288 "params": { 00:31:24.288 "name": "Nvme1", 00:31:24.288 "trtype": "tcp", 00:31:24.288 "traddr": "10.0.0.2", 00:31:24.288 "adrfam": "ipv4", 00:31:24.288 "trsvcid": "4420", 00:31:24.288 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:24.288 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:24.288 "hdgst": false, 00:31:24.288 "ddgst": false 00:31:24.288 }, 00:31:24.288 "method": "bdev_nvme_attach_controller" 00:31:24.288 }' 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:24.288 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:24.289 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:24.289 19:27:29 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:24.289 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:24.289 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:24.289 fio-3.35 00:31:24.289 Starting 2 threads 00:31:24.289 EAL: No free 2048 kB hugepages reported on node 1 00:31:34.288 00:31:34.288 filename0: (groupid=0, jobs=1): err= 0: pid=1639196: Fri Jul 12 19:27:40 2024 00:31:34.288 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10004msec) 00:31:34.288 slat (nsec): min=5413, max=33014, avg=6364.97, stdev=1629.24 00:31:34.288 clat (usec): min=41901, max=43063, avg=42012.27, stdev=174.01 00:31:34.288 lat (usec): min=41909, max=43096, avg=42018.64, stdev=174.30 00:31:34.288 clat percentiles (usec): 00:31:34.288 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:34.288 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:34.288 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:34.288 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:34.288 | 99.99th=[43254] 
00:31:34.288 bw ( KiB/s): min= 352, max= 384, per=33.90%, avg=380.63, stdev=10.09, samples=19 00:31:34.288 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:34.288 lat (msec) : 50=100.00% 00:31:34.288 cpu : usr=96.53%, sys=3.26%, ctx=13, majf=0, minf=162 00:31:34.288 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.288 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.288 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.288 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:34.289 filename1: (groupid=0, jobs=1): err= 0: pid=1639197: Fri Jul 12 19:27:40 2024 00:31:34.289 read: IOPS=185, BW=741KiB/s (759kB/s)(7424KiB/10019msec) 00:31:34.289 slat (nsec): min=5403, max=32331, avg=6183.56, stdev=1297.56 00:31:34.289 clat (usec): min=1043, max=43111, avg=21574.05, stdev=20216.56 00:31:34.289 lat (usec): min=1049, max=43144, avg=21580.23, stdev=20216.55 00:31:34.289 clat percentiles (usec): 00:31:34.289 | 1.00th=[ 1188], 5.00th=[ 1221], 10.00th=[ 1237], 20.00th=[ 1270], 00:31:34.289 | 30.00th=[ 1287], 40.00th=[ 1303], 50.00th=[41157], 60.00th=[41681], 00:31:34.289 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:34.289 | 99.00th=[41681], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:31:34.289 | 99.99th=[43254] 00:31:34.289 bw ( KiB/s): min= 672, max= 768, per=66.01%, avg=740.80, stdev=33.28, samples=20 00:31:34.289 iops : min= 168, max= 192, avg=185.20, stdev= 8.32, samples=20 00:31:34.289 lat (msec) : 2=49.78%, 50=50.22% 00:31:34.289 cpu : usr=97.10%, sys=2.70%, ctx=10, majf=0, minf=93 00:31:34.289 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:34.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:34.289 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:34.289 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:34.289 00:31:34.289 Run status group 0 (all jobs): 00:31:34.289 READ: bw=1121KiB/s (1148kB/s), 381KiB/s-741KiB/s (390kB/s-759kB/s), io=11.0MiB (11.5MB), run=10004-10019msec 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.550 19:27:40 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.550 00:31:34.550 real 0m11.638s 00:31:34.550 user 0m36.877s 00:31:34.550 sys 0m0.940s 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 ************************************ 00:31:34.550 END TEST fio_dif_1_multi_subsystems 00:31:34.550 ************************************ 00:31:34.550 19:27:40 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:34.550 19:27:40 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:34.550 19:27:40 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:34.550 19:27:40 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 ************************************ 00:31:34.550 START TEST fio_dif_rand_params 00:31:34.550 ************************************ 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.550 bdev_null0 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.550 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:34.811 [2024-07-12 19:27:40.708070] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:34.811 { 00:31:34.811 "params": { 00:31:34.811 "name": "Nvme$subsystem", 00:31:34.811 "trtype": "$TEST_TRANSPORT", 00:31:34.811 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:34.811 "adrfam": "ipv4", 00:31:34.811 "trsvcid": "$NVMF_PORT", 00:31:34.811 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:34.811 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:34.811 "hdgst": ${hdgst:-false}, 00:31:34.811 "ddgst": ${ddgst:-false} 00:31:34.811 }, 00:31:34.811 "method": "bdev_nvme_attach_controller" 00:31:34.811 } 00:31:34.811 EOF 00:31:34.811 )") 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:34.811 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:34.812 "params": { 00:31:34.812 "name": "Nvme0", 00:31:34.812 "trtype": "tcp", 00:31:34.812 "traddr": "10.0.0.2", 00:31:34.812 "adrfam": "ipv4", 00:31:34.812 "trsvcid": "4420", 00:31:34.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:34.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:34.812 "hdgst": false, 00:31:34.812 "ddgst": false 00:31:34.812 }, 00:31:34.812 "method": "bdev_nvme_attach_controller" 00:31:34.812 }' 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:34.812 
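What follows in the trace is the fio invocation itself: the spdk_bdev ioengine is pulled in via LD_PRELOAD and pointed at the bdev_nvme_attach_controller JSON on /dev/fd/62, while the job file from gen_fio_conf arrives on /dev/fd/61. The job file itself is not echoed in the log; a hypothetical one consistent with the parameters visible in this run (randread, bs=128k, iodepth=3, numjobs=3, 5 s runtime, bdev name Nvme0n1 assumed from the attached controller) would look roughly like:

    ; job.fio -- hypothetical, reconstructed from the traced parameters
    [global]
    ioengine=spdk_bdev        ; provided by the preloaded build/fio/spdk_bdev plugin
    spdk_json_conf=bdev.json  ; the printed bdev_nvme_attach_controller config
    thread=1
    direct=1
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    time_based=1
    runtime=5

    [filename0]
    filename=Nvme0n1          ; assumed namespace name exposed by the attached controller

It would then be launched the way the harness does it, with the plugin preloaded: LD_PRELOAD=./build/fio/spdk_bdev fio job.fio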
19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:34.812 19:27:40 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:35.073 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:35.073 ... 00:31:35.073 fio-3.35 00:31:35.073 Starting 3 threads 00:31:35.073 EAL: No free 2048 kB hugepages reported on node 1 00:31:41.660 00:31:41.660 filename0: (groupid=0, jobs=1): err= 0: pid=1641401: Fri Jul 12 19:27:46 2024 00:31:41.660 read: IOPS=91, BW=11.5MiB/s (12.0MB/s)(57.5MiB/5006msec) 00:31:41.660 slat (nsec): min=5431, max=32929, avg=7491.70, stdev=2210.77 00:31:41.660 clat (usec): min=7330, max=55131, avg=32612.85, stdev=20199.98 00:31:41.660 lat (usec): min=7336, max=55137, avg=32620.34, stdev=20200.03 00:31:41.660 clat percentiles (usec): 00:31:41.660 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10421], 00:31:41.660 | 30.00th=[11207], 40.00th=[12518], 50.00th=[49546], 60.00th=[50594], 00:31:41.660 | 70.00th=[51119], 80.00th=[51643], 90.00th=[52167], 95.00th=[52691], 00:31:41.660 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:31:41.660 | 99.99th=[55313] 00:31:41.660 bw ( KiB/s): min= 8448, max=20992, per=30.50%, avg=11699.20, stdev=3844.83, samples=10 00:31:41.660 iops : min= 66, max= 164, avg=91.40, stdev=30.04, samples=10 00:31:41.660 lat (msec) : 10=12.83%, 20=33.04%, 50=6.30%, 100=47.83% 00:31:41.660 cpu : usr=97.38%, sys=2.38%, ctx=8, majf=0, minf=75 00:31:41.660 IO depths : 1=19.8%, 2=80.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.660 issued rwts: total=460,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.660 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.660 filename0: (groupid=0, jobs=1): err= 0: pid=1641402: Fri Jul 12 19:27:46 2024 00:31:41.660 read: IOPS=133, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5004msec) 00:31:41.660 slat (nsec): min=5435, max=32506, avg=7582.02, stdev=2075.56 00:31:41.660 clat (usec): min=6842, max=93261, avg=22521.15, stdev=24043.56 00:31:41.660 lat (usec): min=6848, max=93270, avg=22528.73, stdev=24043.97 00:31:41.660 clat percentiles (usec): 00:31:41.660 | 1.00th=[ 7242], 5.00th=[ 7570], 10.00th=[ 7898], 20.00th=[ 8225], 00:31:41.660 | 30.00th=[ 8455], 40.00th=[ 9503], 50.00th=[10159], 60.00th=[10945], 00:31:41.660 | 70.00th=[12649], 80.00th=[50070], 90.00th=[51643], 95.00th=[90702], 00:31:41.660 | 99.00th=[92799], 99.50th=[92799], 99.90th=[92799], 99.95th=[92799], 00:31:41.660 | 99.99th=[92799] 00:31:41.660 bw ( KiB/s): min= 8448, max=27904, per=44.25%, avg=16974.80, stdev=5851.40, samples=10 00:31:41.660 iops : min= 66, max= 218, avg=132.60, stdev=45.73, samples=10 00:31:41.660 lat (msec) : 10=47.60%, 20=27.18%, 50=5.56%, 100=19.67% 00:31:41.660 cpu : usr=96.30%, sys=3.22%, ctx=215, majf=0, minf=115 00:31:41.660 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.660 issued rwts: total=666,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.660 latency : 
target=0, window=0, percentile=100.00%, depth=3 00:31:41.660 filename0: (groupid=0, jobs=1): err= 0: pid=1641403: Fri Jul 12 19:27:46 2024 00:31:41.660 read: IOPS=76, BW=9754KiB/s (9988kB/s)(48.0MiB/5039msec) 00:31:41.660 slat (nsec): min=5413, max=32829, avg=7454.84, stdev=1989.95 00:31:41.660 clat (usec): min=9604, max=94065, avg=39346.81, stdev=19123.74 00:31:41.660 lat (usec): min=9612, max=94071, avg=39354.26, stdev=19123.57 00:31:41.660 clat percentiles (usec): 00:31:41.660 | 1.00th=[ 9634], 5.00th=[ 9896], 10.00th=[10159], 20.00th=[11731], 00:31:41.660 | 30.00th=[15533], 40.00th=[50594], 50.00th=[50594], 60.00th=[51119], 00:31:41.660 | 70.00th=[51119], 80.00th=[51643], 90.00th=[52691], 95.00th=[53740], 00:31:41.660 | 99.00th=[55837], 99.50th=[93848], 99.90th=[93848], 99.95th=[93848], 00:31:41.660 | 99.99th=[93848] 00:31:41.660 bw ( KiB/s): min= 7680, max=13824, per=25.43%, avg=9753.60, stdev=1738.16, samples=10 00:31:41.660 iops : min= 60, max= 108, avg=76.20, stdev=13.58, samples=10 00:31:41.660 lat (msec) : 10=7.55%, 20=23.70%, 50=1.82%, 100=66.93% 00:31:41.660 cpu : usr=97.12%, sys=2.66%, ctx=7, majf=0, minf=69 00:31:41.660 IO depths : 1=13.0%, 2=87.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:41.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.660 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:41.660 issued rwts: total=384,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:41.660 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:41.660 00:31:41.660 Run status group 0 (all jobs): 00:31:41.660 READ: bw=37.5MiB/s (39.3MB/s), 9754KiB/s-16.6MiB/s (9988kB/s-17.4MB/s), io=189MiB (198MB), run=5004-5039msec 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 
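
For reference, the create_subsystem/destroy_subsystem helpers traced here reduce to a short RPC sequence against the running target. A minimal stand-alone sketch in plain shell, assuming SPDK's scripts/rpc.py is on PATH and a target is already running; the bdev geometry, serial number and the 10.0.0.2:4420 TCP listener simply repeat the values visible in the trace:

# target side: one null bdev with 16-byte metadata and DIF type 2,
# exported through an NVMe-oF/TCP subsystem (values copied from the trace)
rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# teardown, mirroring destroy_subsystem above
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc.py bdev_null_delete bdev_null0
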
00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.660 bdev_null0 00:31:41.660 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 [2024-07-12 19:27:46.762702] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 bdev_null1 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 bdev_null2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@532 -- # local subsystem config 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.661 { 00:31:41.661 "params": { 00:31:41.661 "name": "Nvme$subsystem", 00:31:41.661 "trtype": "$TEST_TRANSPORT", 00:31:41.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.661 "adrfam": "ipv4", 00:31:41.661 "trsvcid": "$NVMF_PORT", 00:31:41.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.661 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.661 "hdgst": ${hdgst:-false}, 00:31:41.661 "ddgst": ${ddgst:-false} 00:31:41.661 }, 00:31:41.661 "method": "bdev_nvme_attach_controller" 00:31:41.661 } 00:31:41.661 EOF 00:31:41.661 )") 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:41.661 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.661 { 00:31:41.661 "params": { 00:31:41.661 "name": "Nvme$subsystem", 00:31:41.661 "trtype": "$TEST_TRANSPORT", 00:31:41.661 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.661 "adrfam": "ipv4", 00:31:41.661 "trsvcid": "$NVMF_PORT", 00:31:41.661 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.662 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.662 "hdgst": ${hdgst:-false}, 00:31:41.662 "ddgst": ${ddgst:-false} 00:31:41.662 }, 00:31:41.662 "method": "bdev_nvme_attach_controller" 00:31:41.662 } 00:31:41.662 EOF 00:31:41.662 )") 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:41.662 { 00:31:41.662 "params": { 00:31:41.662 "name": "Nvme$subsystem", 00:31:41.662 "trtype": "$TEST_TRANSPORT", 00:31:41.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:41.662 "adrfam": "ipv4", 00:31:41.662 "trsvcid": "$NVMF_PORT", 00:31:41.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:41.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:41.662 "hdgst": ${hdgst:-false}, 00:31:41.662 "ddgst": ${ddgst:-false} 00:31:41.662 }, 00:31:41.662 "method": "bdev_nvme_attach_controller" 00:31:41.662 } 00:31:41.662 EOF 00:31:41.662 )") 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:41.662 "params": { 00:31:41.662 "name": "Nvme0", 00:31:41.662 "trtype": "tcp", 00:31:41.662 "traddr": "10.0.0.2", 00:31:41.662 "adrfam": "ipv4", 00:31:41.662 "trsvcid": "4420", 00:31:41.662 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:41.662 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:41.662 "hdgst": false, 00:31:41.662 "ddgst": false 00:31:41.662 }, 00:31:41.662 "method": "bdev_nvme_attach_controller" 00:31:41.662 },{ 00:31:41.662 "params": { 00:31:41.662 "name": "Nvme1", 00:31:41.662 "trtype": "tcp", 00:31:41.662 "traddr": "10.0.0.2", 00:31:41.662 "adrfam": "ipv4", 00:31:41.662 "trsvcid": "4420", 00:31:41.662 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:41.662 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:41.662 "hdgst": false, 00:31:41.662 "ddgst": false 00:31:41.662 }, 00:31:41.662 "method": "bdev_nvme_attach_controller" 00:31:41.662 },{ 00:31:41.662 "params": { 00:31:41.662 "name": "Nvme2", 00:31:41.662 "trtype": "tcp", 00:31:41.662 "traddr": "10.0.0.2", 00:31:41.662 "adrfam": "ipv4", 00:31:41.662 "trsvcid": "4420", 00:31:41.662 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:41.662 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:41.662 "hdgst": false, 00:31:41.662 "ddgst": false 00:31:41.662 }, 00:31:41.662 "method": "bdev_nvme_attach_controller" 00:31:41.662 }' 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:41.662 19:27:46 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.662 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:41.662 ... 00:31:41.662 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:41.662 ... 00:31:41.662 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:41.662 ... 
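
The job file fio reads on /dev/fd/61 comes from gen_fio_conf with the parameters set above (rw=randread, bs=4k, iodepth=16, numjobs=8, three files). A rough hand-written equivalent, driven through the fio bdev plugin the same way the trace does; the Nvme0n1/Nvme1n1/Nvme2n1 bdev names, the 10-second runtime and the /tmp and plugin paths are assumptions for illustration, not values copied from the script:

# job file roughly matching the 24-thread run below (3 files x numjobs=8)
cat > /tmp/dif.fio <<'FIO'
[global]
thread=1
rw=randread
bs=4k
iodepth=16
numjobs=8
time_based=1
runtime=10

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1

[filename2]
filename=Nvme2n1
FIO

# same invocation shape as the trace: preload the spdk_bdev engine and point it
# at the bdev JSON config and the job file
LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme_bdev.json /tmp/dif.fio
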
00:31:41.662 fio-3.35 00:31:41.662 Starting 24 threads 00:31:41.662 EAL: No free 2048 kB hugepages reported on node 1 00:31:53.977 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642904: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=509, BW=2038KiB/s (2087kB/s)(19.9MiB/10002msec) 00:31:53.977 slat (usec): min=5, max=100, avg=13.10, stdev=12.59 00:31:53.977 clat (usec): min=4787, max=60354, avg=31305.48, stdev=5482.75 00:31:53.977 lat (usec): min=4806, max=60360, avg=31318.59, stdev=5483.57 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 1.00th=[ 9372], 5.00th=[21627], 10.00th=[23987], 20.00th=[31065], 00:31:53.977 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.977 | 70.00th=[32637], 80.00th=[32900], 90.00th=[34341], 95.00th=[40109], 00:31:53.977 | 99.00th=[46400], 99.50th=[47973], 99.90th=[60556], 99.95th=[60556], 00:31:53.977 | 99.99th=[60556] 00:31:53.977 bw ( KiB/s): min= 1872, max= 2560, per=4.29%, avg=2037.89, stdev=145.79, samples=19 00:31:53.977 iops : min= 468, max= 640, avg=509.47, stdev=36.45, samples=19 00:31:53.977 lat (msec) : 10=1.08%, 20=2.61%, 50=95.88%, 100=0.43% 00:31:53.977 cpu : usr=98.76%, sys=0.89%, ctx=24, majf=0, minf=55 00:31:53.977 IO depths : 1=3.8%, 2=7.8%, 4=18.7%, 8=60.7%, 16=8.9%, 32=0.0%, >=64=0.0% 00:31:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 complete : 0=0.0%, 4=92.5%, 8=2.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 issued rwts: total=5096,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642905: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=547, BW=2191KiB/s (2243kB/s)(21.4MiB/10021msec) 00:31:53.977 slat (nsec): min=5580, max=84981, avg=7227.05, stdev=3285.67 00:31:53.977 clat (usec): min=3214, max=35715, avg=29150.94, stdev=5503.42 00:31:53.977 lat (usec): min=3250, max=35722, avg=29158.17, stdev=5502.93 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 1.00th=[ 5997], 5.00th=[19268], 10.00th=[21103], 20.00th=[23462], 00:31:53.977 | 30.00th=[30802], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.977 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32900], 95.00th=[33162], 00:31:53.977 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:31:53.977 | 99.99th=[35914] 00:31:53.977 bw ( KiB/s): min= 1920, max= 2688, per=4.62%, avg=2195.89, stdev=256.30, samples=19 00:31:53.977 iops : min= 480, max= 672, avg=548.95, stdev=64.02, samples=19 00:31:53.977 lat (msec) : 4=0.26%, 10=1.20%, 20=4.96%, 50=93.59% 00:31:53.977 cpu : usr=99.30%, sys=0.42%, ctx=15, majf=0, minf=99 00:31:53.977 IO depths : 1=6.2%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 issued rwts: total=5488,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642906: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=491, BW=1967KiB/s (2014kB/s)(19.2MiB/10013msec) 00:31:53.977 slat (usec): min=5, max=102, avg=18.77, stdev=15.45 00:31:53.977 clat (usec): min=14268, max=54469, avg=32406.18, stdev=3837.44 00:31:53.977 lat (usec): min=14277, max=54495, avg=32424.95, stdev=3838.06 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 
1.00th=[20841], 5.00th=[25822], 10.00th=[31065], 20.00th=[31589], 00:31:53.977 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.977 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[39060], 00:31:53.977 | 99.00th=[49546], 99.50th=[50594], 99.90th=[53740], 99.95th=[54264], 00:31:53.977 | 99.99th=[54264] 00:31:53.977 bw ( KiB/s): min= 1872, max= 2048, per=4.14%, avg=1965.00, stdev=50.15, samples=19 00:31:53.977 iops : min= 468, max= 512, avg=491.21, stdev=12.50, samples=19 00:31:53.977 lat (msec) : 20=0.75%, 50=98.64%, 100=0.61% 00:31:53.977 cpu : usr=97.55%, sys=1.28%, ctx=81, majf=0, minf=49 00:31:53.977 IO depths : 1=1.5%, 2=3.3%, 4=13.6%, 8=68.5%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 complete : 0=0.0%, 4=92.2%, 8=4.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 issued rwts: total=4924,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642907: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=487, BW=1952KiB/s (1999kB/s)(19.2MiB/10050msec) 00:31:53.977 slat (usec): min=5, max=125, avg=23.95, stdev=18.50 00:31:53.977 clat (usec): min=12086, max=55213, avg=32554.07, stdev=4127.25 00:31:53.977 lat (usec): min=12112, max=55235, avg=32578.03, stdev=4126.90 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 1.00th=[21627], 5.00th=[26608], 10.00th=[31065], 20.00th=[31589], 00:31:53.977 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.977 | 70.00th=[32375], 80.00th=[33162], 90.00th=[33817], 95.00th=[40109], 00:31:53.977 | 99.00th=[50594], 99.50th=[52691], 99.90th=[53216], 99.95th=[53216], 00:31:53.977 | 99.99th=[55313] 00:31:53.977 bw ( KiB/s): min= 1843, max= 2048, per=4.10%, avg=1950.26, stdev=62.00, samples=19 00:31:53.977 iops : min= 460, max= 512, avg=487.53, stdev=15.57, samples=19 00:31:53.977 lat (msec) : 20=0.41%, 50=98.29%, 100=1.31% 00:31:53.977 cpu : usr=99.27%, sys=0.41%, ctx=45, majf=0, minf=44 00:31:53.977 IO depths : 1=4.3%, 2=9.6%, 4=22.1%, 8=55.5%, 16=8.5%, 32=0.0%, >=64=0.0% 00:31:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 issued rwts: total=4904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642908: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=493, BW=1974KiB/s (2021kB/s)(19.3MiB/10019msec) 00:31:53.977 slat (nsec): min=5612, max=76714, avg=13760.85, stdev=11376.15 00:31:53.977 clat (usec): min=17110, max=52891, avg=32306.85, stdev=1494.06 00:31:53.977 lat (usec): min=17119, max=52900, avg=32320.61, stdev=1493.27 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 1.00th=[24511], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:31:53.977 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.977 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.977 | 99.00th=[35390], 99.50th=[35914], 99.90th=[43779], 99.95th=[43779], 00:31:53.977 | 99.99th=[52691] 00:31:53.977 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1966.63, stdev=62.74, samples=19 00:31:53.977 iops : min= 480, max= 512, avg=491.58, stdev=15.59, samples=19 00:31:53.977 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 00:31:53.977 
cpu : usr=99.04%, sys=0.63%, ctx=27, majf=0, minf=48 00:31:53.977 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642909: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:31:53.977 slat (usec): min=5, max=131, avg=26.98, stdev=18.94 00:31:53.977 clat (usec): min=12375, max=57547, avg=32213.03, stdev=2110.04 00:31:53.977 lat (usec): min=12381, max=57562, avg=32240.01, stdev=2109.34 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:53.977 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.977 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.977 | 99.00th=[35390], 99.50th=[36439], 99.90th=[57410], 99.95th=[57410], 00:31:53.977 | 99.99th=[57410] 00:31:53.977 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1960.37, stdev=74.30, samples=19 00:31:53.977 iops : min= 448, max= 512, avg=490.05, stdev=18.67, samples=19 00:31:53.977 lat (msec) : 20=0.37%, 50=99.31%, 100=0.32% 00:31:53.977 cpu : usr=99.24%, sys=0.45%, ctx=21, majf=0, minf=39 00:31:53.977 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:53.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.977 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.977 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.977 filename0: (groupid=0, jobs=1): err= 0: pid=1642910: Fri Jul 12 19:27:58 2024 00:31:53.977 read: IOPS=509, BW=2038KiB/s (2087kB/s)(19.9MiB/10019msec) 00:31:53.977 slat (nsec): min=5580, max=94031, avg=9789.62, stdev=7743.78 00:31:53.977 clat (usec): min=9044, max=48052, avg=31322.68, stdev=3460.03 00:31:53.977 lat (usec): min=9062, max=48059, avg=31332.47, stdev=3460.18 00:31:53.977 clat percentiles (usec): 00:31:53.977 | 1.00th=[16909], 5.00th=[22414], 10.00th=[30016], 20.00th=[31589], 00:31:53.977 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.978 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.978 | 99.00th=[34341], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:31:53.978 | 99.99th=[47973] 00:31:53.978 bw ( KiB/s): min= 1920, max= 2180, per=4.28%, avg=2035.80, stdev=82.45, samples=20 00:31:53.978 iops : min= 480, max= 545, avg=508.95, stdev=20.61, samples=20 00:31:53.978 lat (msec) : 10=0.12%, 20=2.29%, 50=97.59% 00:31:53.978 cpu : usr=99.19%, sys=0.51%, ctx=18, majf=0, minf=79 00:31:53.978 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.3%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=5104,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename0: (groupid=0, jobs=1): err= 0: pid=1642911: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=492, BW=1970KiB/s 
(2018kB/s)(19.2MiB/10005msec) 00:31:53.978 slat (nsec): min=5674, max=85497, avg=20334.44, stdev=14363.81 00:31:53.978 clat (usec): min=22043, max=44359, avg=32284.32, stdev=1502.21 00:31:53.978 lat (usec): min=22051, max=44374, avg=32304.66, stdev=1501.55 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[28967], 5.00th=[31065], 10.00th=[31327], 20.00th=[31851], 00:31:53.978 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.978 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.978 | 99.00th=[35914], 99.50th=[42206], 99.90th=[44303], 99.95th=[44303], 00:31:53.978 | 99.99th=[44303] 00:31:53.978 bw ( KiB/s): min= 1920, max= 2048, per=4.14%, avg=1966.63, stdev=62.74, samples=19 00:31:53.978 iops : min= 480, max= 512, avg=491.58, stdev=15.59, samples=19 00:31:53.978 lat (msec) : 50=100.00% 00:31:53.978 cpu : usr=96.25%, sys=1.84%, ctx=197, majf=0, minf=48 00:31:53.978 IO depths : 1=6.1%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642912: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=502, BW=2009KiB/s (2057kB/s)(19.6MiB/10004msec) 00:31:53.978 slat (nsec): min=5422, max=90055, avg=15829.66, stdev=13253.52 00:31:53.978 clat (usec): min=2546, max=35089, avg=31729.16, stdev=3987.34 00:31:53.978 lat (usec): min=2561, max=35097, avg=31744.99, stdev=3986.79 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[ 5080], 5.00th=[30802], 10.00th=[31589], 20.00th=[31851], 00:31:53.978 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.978 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.978 | 99.00th=[34341], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:31:53.978 | 99.99th=[34866] 00:31:53.978 bw ( KiB/s): min= 1920, max= 2688, per=4.22%, avg=2007.58, stdev=176.19, samples=19 00:31:53.978 iops : min= 480, max= 672, avg=501.89, stdev=44.05, samples=19 00:31:53.978 lat (msec) : 4=0.54%, 10=1.33%, 20=0.60%, 50=97.53% 00:31:53.978 cpu : usr=98.70%, sys=0.98%, ctx=20, majf=0, minf=46 00:31:53.978 IO depths : 1=6.1%, 2=12.3%, 4=24.8%, 8=50.4%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=5024,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642913: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=485, BW=1941KiB/s (1988kB/s)(19.0MiB/10014msec) 00:31:53.978 slat (usec): min=5, max=104, avg=18.10, stdev=14.50 00:31:53.978 clat (usec): min=16143, max=60489, avg=32812.23, stdev=3657.36 00:31:53.978 lat (usec): min=16155, max=60507, avg=32830.33, stdev=3656.81 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[21890], 5.00th=[30540], 10.00th=[31589], 20.00th=[31851], 00:31:53.978 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.978 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[40633], 00:31:53.978 | 99.00th=[47449], 99.50th=[49546], 
99.90th=[60556], 99.95th=[60556], 00:31:53.978 | 99.99th=[60556] 00:31:53.978 bw ( KiB/s): min= 1664, max= 2048, per=4.07%, avg=1933.05, stdev=92.22, samples=19 00:31:53.978 iops : min= 416, max= 512, avg=483.26, stdev=23.06, samples=19 00:31:53.978 lat (msec) : 20=0.16%, 50=99.47%, 100=0.37% 00:31:53.978 cpu : usr=98.97%, sys=0.66%, ctx=67, majf=0, minf=38 00:31:53.978 IO depths : 1=2.7%, 2=6.5%, 4=16.9%, 8=62.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=92.3%, 8=3.0%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=4860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642914: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.7MiB/10003msec) 00:31:53.978 slat (usec): min=5, max=102, avg=18.89, stdev=15.86 00:31:53.978 clat (usec): min=4910, max=71360, avg=33290.20, stdev=5388.10 00:31:53.978 lat (usec): min=4915, max=71377, avg=33309.09, stdev=5388.12 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[15401], 5.00th=[28181], 10.00th=[31589], 20.00th=[31851], 00:31:53.978 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32375], 60.00th=[32637], 00:31:53.978 | 70.00th=[33162], 80.00th=[33817], 90.00th=[38536], 95.00th=[41681], 00:31:53.978 | 99.00th=[54264], 99.50th=[56886], 99.90th=[71828], 99.95th=[71828], 00:31:53.978 | 99.99th=[71828] 00:31:53.978 bw ( KiB/s): min= 1539, max= 2000, per=4.01%, avg=1904.79, stdev=101.47, samples=19 00:31:53.978 iops : min= 384, max= 500, avg=476.16, stdev=25.52, samples=19 00:31:53.978 lat (msec) : 10=0.08%, 20=1.79%, 50=96.73%, 100=1.40% 00:31:53.978 cpu : usr=98.69%, sys=0.84%, ctx=55, majf=0, minf=50 00:31:53.978 IO depths : 1=0.1%, 2=0.6%, 4=5.8%, 8=78.8%, 16=14.7%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=90.0%, 8=6.3%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=4794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642915: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.2MiB/10004msec) 00:31:53.978 slat (usec): min=5, max=125, avg=26.72, stdev=16.37 00:31:53.978 clat (usec): min=4743, max=58638, avg=32233.66, stdev=2351.10 00:31:53.978 lat (usec): min=4750, max=58655, avg=32260.37, stdev=2350.42 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:53.978 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.978 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.978 | 99.00th=[35390], 99.50th=[36963], 99.90th=[58459], 99.95th=[58459], 00:31:53.978 | 99.99th=[58459] 00:31:53.978 bw ( KiB/s): min= 1792, max= 2048, per=4.13%, avg=1960.21, stdev=74.67, samples=19 00:31:53.978 iops : min= 448, max= 512, avg=490.05, stdev=18.67, samples=19 00:31:53.978 lat (msec) : 10=0.12%, 20=0.41%, 50=99.15%, 100=0.32% 00:31:53.978 cpu : usr=97.18%, sys=1.44%, ctx=100, majf=0, minf=43 00:31:53.978 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=94.1%, 8=0.1%, 
16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642916: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=502, BW=2010KiB/s (2058kB/s)(19.6MiB/10006msec) 00:31:53.978 slat (usec): min=5, max=105, avg=16.15, stdev=12.95 00:31:53.978 clat (usec): min=12430, max=55687, avg=31711.19, stdev=4707.55 00:31:53.978 lat (usec): min=12439, max=55697, avg=31727.34, stdev=4709.21 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[20055], 5.00th=[22152], 10.00th=[24773], 20.00th=[31327], 00:31:53.978 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.978 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[41157], 00:31:53.978 | 99.00th=[47449], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:31:53.978 | 99.99th=[55837] 00:31:53.978 bw ( KiB/s): min= 1872, max= 2272, per=4.23%, avg=2009.26, stdev=116.28, samples=19 00:31:53.978 iops : min= 468, max= 568, avg=502.32, stdev=29.07, samples=19 00:31:53.978 lat (msec) : 20=0.99%, 50=98.41%, 100=0.60% 00:31:53.978 cpu : usr=98.82%, sys=0.86%, ctx=23, majf=0, minf=55 00:31:53.978 IO depths : 1=4.2%, 2=8.6%, 4=19.3%, 8=59.2%, 16=8.8%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=92.7%, 8=2.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=5028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642917: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=493, BW=1975KiB/s (2022kB/s)(19.3MiB/10006msec) 00:31:53.978 slat (usec): min=5, max=122, avg=22.23, stdev=16.98 00:31:53.978 clat (usec): min=14523, max=51842, avg=32218.18, stdev=2393.99 00:31:53.978 lat (usec): min=14530, max=51864, avg=32240.40, stdev=2394.17 00:31:53.978 clat percentiles (usec): 00:31:53.978 | 1.00th=[22938], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:53.978 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32113], 00:31:53.978 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.978 | 99.00th=[37487], 99.50th=[48497], 99.90th=[51643], 99.95th=[51643], 00:31:53.978 | 99.99th=[51643] 00:31:53.978 bw ( KiB/s): min= 1840, max= 2096, per=4.15%, avg=1972.00, stdev=74.80, samples=19 00:31:53.978 iops : min= 460, max= 524, avg=493.00, stdev=18.70, samples=19 00:31:53.978 lat (msec) : 20=0.73%, 50=98.95%, 100=0.32% 00:31:53.978 cpu : usr=99.05%, sys=0.63%, ctx=27, majf=0, minf=37 00:31:53.978 IO depths : 1=5.9%, 2=11.9%, 4=24.5%, 8=51.1%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:53.978 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.978 issued rwts: total=4940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.978 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.978 filename1: (groupid=0, jobs=1): err= 0: pid=1642918: Fri Jul 12 19:27:58 2024 00:31:53.978 read: IOPS=493, BW=1973KiB/s (2021kB/s)(19.3MiB/10017msec) 00:31:53.978 slat (usec): min=5, max=113, avg=23.53, stdev=18.36 00:31:53.978 clat (usec): min=17709, max=64289, avg=32206.59, stdev=4219.53 00:31:53.978 lat (usec): min=17715, max=64296, avg=32230.12, stdev=4220.75 00:31:53.978 clat percentiles (usec): 
00:31:53.978 | 1.00th=[20579], 5.00th=[24511], 10.00th=[29492], 20.00th=[31589], 00:31:53.978 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.979 | 70.00th=[32375], 80.00th=[32900], 90.00th=[34341], 95.00th=[39060], 00:31:53.979 | 99.00th=[49546], 99.50th=[50594], 99.90th=[56886], 99.95th=[56886], 00:31:53.979 | 99.99th=[64226] 00:31:53.979 bw ( KiB/s): min= 1792, max= 2112, per=4.16%, avg=1976.79, stdev=83.29, samples=19 00:31:53.979 iops : min= 448, max= 528, avg=494.16, stdev=20.79, samples=19 00:31:53.979 lat (msec) : 20=0.97%, 50=98.28%, 100=0.75% 00:31:53.979 cpu : usr=99.07%, sys=0.59%, ctx=55, majf=0, minf=53 00:31:53.979 IO depths : 1=4.1%, 2=8.8%, 4=19.7%, 8=58.4%, 16=9.0%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=92.9%, 8=1.9%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4942,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename1: (groupid=0, jobs=1): err= 0: pid=1642919: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=485, BW=1943KiB/s (1989kB/s)(19.0MiB/10003msec) 00:31:53.979 slat (usec): min=5, max=106, avg=23.95, stdev=17.34 00:31:53.979 clat (usec): min=4959, max=64977, avg=32766.73, stdev=3997.88 00:31:53.979 lat (usec): min=4964, max=64983, avg=32790.69, stdev=3996.27 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[21103], 5.00th=[30802], 10.00th=[31327], 20.00th=[31589], 00:31:53.979 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.979 | 70.00th=[32637], 80.00th=[33162], 90.00th=[34341], 95.00th=[40109], 00:31:53.979 | 99.00th=[48497], 99.50th=[52691], 99.90th=[56886], 99.95th=[56886], 00:31:53.979 | 99.99th=[64750] 00:31:53.979 bw ( KiB/s): min= 1792, max= 2048, per=4.08%, avg=1937.63, stdev=71.85, samples=19 00:31:53.979 iops : min= 448, max= 512, avg=484.37, stdev=18.05, samples=19 00:31:53.979 lat (msec) : 10=0.08%, 20=0.66%, 50=98.44%, 100=0.82% 00:31:53.979 cpu : usr=99.11%, sys=0.57%, ctx=21, majf=0, minf=56 00:31:53.979 IO depths : 1=2.3%, 2=5.4%, 4=14.0%, 8=66.6%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=91.8%, 8=3.7%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642920: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=492, BW=1970KiB/s (2017kB/s)(19.2MiB/10008msec) 00:31:53.979 slat (usec): min=5, max=106, avg=16.02, stdev=14.31 00:31:53.979 clat (usec): min=19430, max=57222, avg=32350.73, stdev=1353.86 00:31:53.979 lat (usec): min=19437, max=57246, avg=32366.75, stdev=1352.84 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[29492], 5.00th=[31327], 10.00th=[31589], 20.00th=[31851], 00:31:53.979 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.979 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.979 | 99.00th=[35390], 99.50th=[40633], 99.90th=[42730], 99.95th=[43254], 00:31:53.979 | 99.99th=[57410] 00:31:53.979 bw ( KiB/s): min= 1904, max= 2048, per=4.14%, avg=1966.63, stdev=61.37, samples=19 00:31:53.979 iops : min= 476, max= 512, avg=491.58, stdev=15.24, samples=19 00:31:53.979 lat (msec) : 20=0.04%, 50=99.92%, 100=0.04% 
00:31:53.979 cpu : usr=99.09%, sys=0.51%, ctx=43, majf=0, minf=43 00:31:53.979 IO depths : 1=5.2%, 2=11.5%, 4=25.0%, 8=51.0%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642921: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=505, BW=2023KiB/s (2071kB/s)(19.8MiB/10025msec) 00:31:53.979 slat (nsec): min=5580, max=98829, avg=12106.15, stdev=10672.93 00:31:53.979 clat (usec): min=12939, max=56996, avg=31550.55, stdev=5372.56 00:31:53.979 lat (usec): min=12953, max=57004, avg=31562.66, stdev=5374.12 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[18220], 5.00th=[21103], 10.00th=[23725], 20.00th=[28705], 00:31:53.979 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.979 | 70.00th=[32637], 80.00th=[33162], 90.00th=[36963], 95.00th=[40633], 00:31:53.979 | 99.00th=[48497], 99.50th=[51119], 99.90th=[56361], 99.95th=[56886], 00:31:53.979 | 99.99th=[56886] 00:31:53.979 bw ( KiB/s): min= 1872, max= 2292, per=4.25%, avg=2021.80, stdev=109.40, samples=20 00:31:53.979 iops : min= 468, max= 573, avg=505.45, stdev=27.35, samples=20 00:31:53.979 lat (msec) : 20=3.55%, 50=95.74%, 100=0.71% 00:31:53.979 cpu : usr=97.05%, sys=1.62%, ctx=117, majf=0, minf=41 00:31:53.979 IO depths : 1=2.3%, 2=4.8%, 4=14.1%, 8=67.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=91.4%, 8=3.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=5069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642922: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10001msec) 00:31:53.979 slat (usec): min=5, max=101, avg=18.30, stdev=15.87 00:31:53.979 clat (usec): min=16774, max=63950, avg=31947.13, stdev=3373.25 00:31:53.979 lat (usec): min=16784, max=63972, avg=31965.43, stdev=3374.31 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[19530], 5.00th=[24773], 10.00th=[31065], 20.00th=[31589], 00:31:53.979 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.979 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[34341], 00:31:53.979 | 99.00th=[45876], 99.50th=[47973], 99.90th=[50594], 99.95th=[50594], 00:31:53.979 | 99.99th=[63701] 00:31:53.979 bw ( KiB/s): min= 1900, max= 2160, per=4.19%, avg=1991.16, stdev=76.90, samples=19 00:31:53.979 iops : min= 475, max= 540, avg=497.79, stdev=19.23, samples=19 00:31:53.979 lat (msec) : 20=1.20%, 50=98.64%, 100=0.16% 00:31:53.979 cpu : usr=99.23%, sys=0.46%, ctx=17, majf=0, minf=51 00:31:53.979 IO depths : 1=5.3%, 2=10.9%, 4=22.8%, 8=53.7%, 16=7.3%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642923: Fri Jul 12 19:27:58 2024 00:31:53.979 read: 
IOPS=494, BW=1979KiB/s (2027kB/s)(19.4MiB/10017msec) 00:31:53.979 slat (usec): min=5, max=105, avg=11.87, stdev=10.45 00:31:53.979 clat (usec): min=13055, max=44364, avg=32235.13, stdev=2141.43 00:31:53.979 lat (usec): min=13064, max=44383, avg=32247.00, stdev=2141.12 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[21627], 5.00th=[31065], 10.00th=[31589], 20.00th=[31851], 00:31:53.979 | 30.00th=[32113], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.979 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33817], 95.00th=[33817], 00:31:53.979 | 99.00th=[40109], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:31:53.979 | 99.99th=[44303] 00:31:53.979 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1976.40, stdev=76.45, samples=20 00:31:53.979 iops : min= 448, max= 512, avg=494.10, stdev=19.11, samples=20 00:31:53.979 lat (msec) : 20=0.56%, 50=99.44% 00:31:53.979 cpu : usr=99.06%, sys=0.54%, ctx=69, majf=0, minf=53 00:31:53.979 IO depths : 1=5.6%, 2=11.8%, 4=24.6%, 8=51.1%, 16=6.9%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642924: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10002msec) 00:31:53.979 slat (usec): min=5, max=123, avg=26.21, stdev=19.06 00:31:53.979 clat (usec): min=12352, max=70204, avg=32230.94, stdev=2185.23 00:31:53.979 lat (usec): min=12359, max=70220, avg=32257.15, stdev=2184.27 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[29492], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:53.979 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.979 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.979 | 99.00th=[35914], 99.50th=[36439], 99.90th=[56886], 99.95th=[56886], 00:31:53.979 | 99.99th=[69731] 00:31:53.979 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1960.37, stdev=74.30, samples=19 00:31:53.979 iops : min= 448, max= 512, avg=490.05, stdev=18.67, samples=19 00:31:53.979 lat (msec) : 20=0.37%, 50=99.31%, 100=0.32% 00:31:53.979 cpu : usr=99.32%, sys=0.36%, ctx=20, majf=0, minf=39 00:31:53.979 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642925: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=492, BW=1971KiB/s (2018kB/s)(19.2MiB/10003msec) 00:31:53.979 slat (usec): min=5, max=129, avg=25.43, stdev=17.65 00:31:53.979 clat (usec): min=12237, max=70510, avg=32235.04, stdev=2222.55 00:31:53.979 lat (usec): min=12252, max=70527, avg=32260.47, stdev=2221.59 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[29230], 5.00th=[31065], 10.00th=[31327], 20.00th=[31589], 00:31:53.979 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32113], 00:31:53.979 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33424], 95.00th=[33817], 00:31:53.979 | 99.00th=[35914], 99.50th=[38536], 
99.90th=[57410], 99.95th=[57410], 00:31:53.979 | 99.99th=[70779] 00:31:53.979 bw ( KiB/s): min= 1795, max= 2048, per=4.13%, avg=1960.37, stdev=74.30, samples=19 00:31:53.979 iops : min= 448, max= 512, avg=490.05, stdev=18.67, samples=19 00:31:53.979 lat (msec) : 20=0.32%, 50=99.35%, 100=0.32% 00:31:53.979 cpu : usr=99.19%, sys=0.46%, ctx=70, majf=0, minf=48 00:31:53.979 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:31:53.979 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.979 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.979 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.979 filename2: (groupid=0, jobs=1): err= 0: pid=1642926: Fri Jul 12 19:27:58 2024 00:31:53.979 read: IOPS=494, BW=1978KiB/s (2026kB/s)(19.4MiB/10017msec) 00:31:53.979 slat (usec): min=5, max=124, avg=24.47, stdev=17.76 00:31:53.979 clat (usec): min=16896, max=51846, avg=32147.82, stdev=2963.92 00:31:53.979 lat (usec): min=16933, max=51852, avg=32172.28, stdev=2964.94 00:31:53.979 clat percentiles (usec): 00:31:53.979 | 1.00th=[21365], 5.00th=[28443], 10.00th=[31327], 20.00th=[31589], 00:31:53.979 | 30.00th=[31851], 40.00th=[32113], 50.00th=[32113], 60.00th=[32375], 00:31:53.979 | 70.00th=[32637], 80.00th=[32900], 90.00th=[33424], 95.00th=[34866], 00:31:53.980 | 99.00th=[41681], 99.50th=[47449], 99.90th=[51643], 99.95th=[51643], 00:31:53.980 | 99.99th=[51643] 00:31:53.980 bw ( KiB/s): min= 1904, max= 2096, per=4.15%, avg=1970.89, stdev=63.38, samples=19 00:31:53.980 iops : min= 476, max= 524, avg=492.68, stdev=15.80, samples=19 00:31:53.980 lat (msec) : 20=0.40%, 50=99.27%, 100=0.32% 00:31:53.980 cpu : usr=98.94%, sys=0.69%, ctx=62, majf=0, minf=37 00:31:53.980 IO depths : 1=4.7%, 2=9.6%, 4=22.3%, 8=55.3%, 16=8.1%, 32=0.0%, >=64=0.0% 00:31:53.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.980 complete : 0=0.0%, 4=93.7%, 8=0.8%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.980 issued rwts: total=4954,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.980 filename2: (groupid=0, jobs=1): err= 0: pid=1642927: Fri Jul 12 19:27:58 2024 00:31:53.980 read: IOPS=493, BW=1974KiB/s (2022kB/s)(19.3MiB/10001msec) 00:31:53.980 slat (usec): min=5, max=124, avg=21.96, stdev=17.62 00:31:53.980 clat (usec): min=15335, max=69336, avg=32240.82, stdev=3950.71 00:31:53.980 lat (usec): min=15344, max=69358, avg=32262.78, stdev=3950.76 00:31:53.980 clat percentiles (usec): 00:31:53.980 | 1.00th=[19530], 5.00th=[25297], 10.00th=[31065], 20.00th=[31589], 00:31:53.980 | 30.00th=[31851], 40.00th=[31851], 50.00th=[32113], 60.00th=[32375], 00:31:53.980 | 70.00th=[32375], 80.00th=[32900], 90.00th=[33817], 95.00th=[36439], 00:31:53.980 | 99.00th=[49546], 99.50th=[51643], 99.90th=[54264], 99.95th=[54264], 00:31:53.980 | 99.99th=[69731] 00:31:53.980 bw ( KiB/s): min= 1795, max= 2048, per=4.15%, avg=1970.21, stdev=80.32, samples=19 00:31:53.980 iops : min= 448, max= 512, avg=492.47, stdev=20.13, samples=19 00:31:53.980 lat (msec) : 20=1.52%, 50=97.53%, 100=0.95% 00:31:53.980 cpu : usr=99.07%, sys=0.59%, ctx=60, majf=0, minf=62 00:31:53.980 IO depths : 1=3.8%, 2=8.3%, 4=19.3%, 8=58.9%, 16=9.8%, 32=0.0%, >=64=0.0% 00:31:53.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:53.980 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:31:53.980 issued rwts: total=4936,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:53.980 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:53.980 00:31:53.980 Run status group 0 (all jobs): 00:31:53.980 READ: bw=46.4MiB/s (48.7MB/s), 1917KiB/s-2191KiB/s (1963kB/s-2243kB/s), io=466MiB (489MB), run=10001-10050msec 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 bdev_null0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 [2024-07-12 19:27:58.621381] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:53.980 19:27:58 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 bdev_null1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:53.980 { 00:31:53.980 "params": { 00:31:53.980 "name": "Nvme$subsystem", 00:31:53.980 "trtype": "$TEST_TRANSPORT", 00:31:53.980 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.980 "adrfam": "ipv4", 00:31:53.980 "trsvcid": "$NVMF_PORT", 00:31:53.980 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.980 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.980 "hdgst": ${hdgst:-false}, 00:31:53.980 "ddgst": ${ddgst:-false} 00:31:53.980 }, 00:31:53.980 "method": "bdev_nvme_attach_controller" 00:31:53.980 } 00:31:53.980 EOF 00:31:53.980 )") 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:53.980 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 
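For reference, the create_subsystem calls traced above reduce to four JSON-RPCs per subsystem: create a DIF-enabled null bdev, create an NVMe-oF subsystem, add the bdev as a namespace, and add an NVMe/TCP listener. A minimal standalone sketch (assuming a running nvmf_tgt with the TCP transport already created earlier in the run, and using scripts/rpc.py in place of the harness's rpc_cmd wrapper):

    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # export it over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same four calls are repeated for bdev_null1 / cnode1 in the trace that follows.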
00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:53.981 { 00:31:53.981 "params": { 00:31:53.981 "name": "Nvme$subsystem", 00:31:53.981 "trtype": "$TEST_TRANSPORT", 00:31:53.981 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:53.981 "adrfam": "ipv4", 00:31:53.981 "trsvcid": "$NVMF_PORT", 00:31:53.981 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:53.981 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:53.981 "hdgst": ${hdgst:-false}, 00:31:53.981 "ddgst": ${ddgst:-false} 00:31:53.981 }, 00:31:53.981 "method": "bdev_nvme_attach_controller" 00:31:53.981 } 00:31:53.981 EOF 00:31:53.981 )") 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
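The fio_bdev invocation traced here is ordinary fio with SPDK's bdev ioengine preloaded: gen_nvmf_target_json writes the bdev_nvme_attach_controller configuration (printed a few entries below) to one file descriptor, gen_fio_conf writes the job file to another, and both are passed on the command line. A hand-rolled equivalent using regular files instead of /dev/fd (paths and file names here are illustrative, not the harness's exact ones):

    # bdev.json - the bdev_nvme_attach_controller config shown a few entries below
    # job.fio   - the job file gen_fio_conf builds (the filename0/filename1 sections)
    LD_PRELOAD=/path/to/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio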
00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:53.981 "params": { 00:31:53.981 "name": "Nvme0", 00:31:53.981 "trtype": "tcp", 00:31:53.981 "traddr": "10.0.0.2", 00:31:53.981 "adrfam": "ipv4", 00:31:53.981 "trsvcid": "4420", 00:31:53.981 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:53.981 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:53.981 "hdgst": false, 00:31:53.981 "ddgst": false 00:31:53.981 }, 00:31:53.981 "method": "bdev_nvme_attach_controller" 00:31:53.981 },{ 00:31:53.981 "params": { 00:31:53.981 "name": "Nvme1", 00:31:53.981 "trtype": "tcp", 00:31:53.981 "traddr": "10.0.0.2", 00:31:53.981 "adrfam": "ipv4", 00:31:53.981 "trsvcid": "4420", 00:31:53.981 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:53.981 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:53.981 "hdgst": false, 00:31:53.981 "ddgst": false 00:31:53.981 }, 00:31:53.981 "method": "bdev_nvme_attach_controller" 00:31:53.981 }' 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:53.981 19:27:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:53.981 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:53.981 ... 00:31:53.981 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:53.981 ... 
00:31:53.981 fio-3.35 00:31:53.981 Starting 4 threads 00:31:53.981 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.324 00:31:59.324 filename0: (groupid=0, jobs=1): err= 0: pid=1645275: Fri Jul 12 19:28:04 2024 00:31:59.324 read: IOPS=1896, BW=14.8MiB/s (15.5MB/s)(74.1MiB/5003msec) 00:31:59.324 slat (nsec): min=7879, max=41232, avg=8728.03, stdev=1926.74 00:31:59.324 clat (usec): min=1678, max=7219, avg=4193.53, stdev=744.41 00:31:59.324 lat (usec): min=1697, max=7227, avg=4202.25, stdev=744.36 00:31:59.324 clat percentiles (usec): 00:31:59.324 | 1.00th=[ 2933], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3589], 00:31:59.324 | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4146], 00:31:59.324 | 70.00th=[ 4490], 80.00th=[ 4817], 90.00th=[ 5342], 95.00th=[ 5604], 00:31:59.324 | 99.00th=[ 6259], 99.50th=[ 6521], 99.90th=[ 6915], 99.95th=[ 7177], 00:31:59.324 | 99.99th=[ 7242] 00:31:59.324 bw ( KiB/s): min=14464, max=16352, per=23.14%, avg=15171.20, stdev=597.80, samples=10 00:31:59.324 iops : min= 1808, max= 2044, avg=1896.40, stdev=74.72, samples=10 00:31:59.324 lat (msec) : 2=0.06%, 4=52.76%, 10=47.18% 00:31:59.324 cpu : usr=97.00%, sys=2.72%, ctx=39, majf=0, minf=1 00:31:59.324 IO depths : 1=0.4%, 2=1.4%, 4=71.7%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 issued rwts: total=9487,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:59.324 filename0: (groupid=0, jobs=1): err= 0: pid=1645276: Fri Jul 12 19:28:04 2024 00:31:59.324 read: IOPS=2082, BW=16.3MiB/s (17.1MB/s)(81.4MiB/5001msec) 00:31:59.324 slat (usec): min=5, max=102, avg= 7.81, stdev= 3.87 00:31:59.324 clat (usec): min=603, max=7096, avg=3818.97, stdev=727.70 00:31:59.324 lat (usec): min=612, max=7102, avg=3826.78, stdev=727.34 00:31:59.324 clat percentiles (usec): 00:31:59.324 | 1.00th=[ 2409], 5.00th=[ 2802], 10.00th=[ 3032], 20.00th=[ 3294], 00:31:59.324 | 30.00th=[ 3458], 40.00th=[ 3621], 50.00th=[ 3752], 60.00th=[ 3851], 00:31:59.324 | 70.00th=[ 3949], 80.00th=[ 4228], 90.00th=[ 4817], 95.00th=[ 5342], 00:31:59.324 | 99.00th=[ 6063], 99.50th=[ 6194], 99.90th=[ 6521], 99.95th=[ 6915], 00:31:59.324 | 99.99th=[ 7111] 00:31:59.324 bw ( KiB/s): min=15984, max=17248, per=25.31%, avg=16595.56, stdev=431.61, samples=9 00:31:59.324 iops : min= 1998, max= 2156, avg=2074.44, stdev=53.95, samples=9 00:31:59.324 lat (usec) : 750=0.01%, 1000=0.01% 00:31:59.324 lat (msec) : 2=0.28%, 4=71.67%, 10=28.03% 00:31:59.324 cpu : usr=88.36%, sys=6.22%, ctx=139, majf=0, minf=9 00:31:59.324 IO depths : 1=0.5%, 2=1.8%, 4=69.8%, 8=27.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 issued rwts: total=10416,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:59.324 filename1: (groupid=0, jobs=1): err= 0: pid=1645277: Fri Jul 12 19:28:04 2024 00:31:59.324 read: IOPS=2411, BW=18.8MiB/s (19.8MB/s)(94.2MiB/5001msec) 00:31:59.324 slat (nsec): min=5399, max=40214, avg=5996.88, stdev=1669.99 00:31:59.324 clat (usec): min=687, max=6386, avg=3299.27, stdev=611.93 00:31:59.324 lat (usec): min=693, max=6394, avg=3305.27, stdev=611.92 00:31:59.324 clat percentiles (usec): 00:31:59.324 
| 1.00th=[ 1876], 5.00th=[ 2343], 10.00th=[ 2573], 20.00th=[ 2835], 00:31:59.324 | 30.00th=[ 2966], 40.00th=[ 3130], 50.00th=[ 3261], 60.00th=[ 3458], 00:31:59.324 | 70.00th=[ 3621], 80.00th=[ 3818], 90.00th=[ 3949], 95.00th=[ 4228], 00:31:59.324 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5997], 99.95th=[ 6063], 00:31:59.324 | 99.99th=[ 6390] 00:31:59.324 bw ( KiB/s): min=16688, max=20688, per=29.54%, avg=19368.89, stdev=1428.44, samples=9 00:31:59.324 iops : min= 2086, max= 2586, avg=2421.11, stdev=178.56, samples=9 00:31:59.324 lat (usec) : 750=0.01%, 1000=0.03% 00:31:59.324 lat (msec) : 2=1.42%, 4=88.93%, 10=9.61% 00:31:59.324 cpu : usr=97.40%, sys=2.32%, ctx=7, majf=0, minf=9 00:31:59.324 IO depths : 1=0.1%, 2=8.6%, 4=61.1%, 8=30.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 complete : 0=0.0%, 4=94.7%, 8=5.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 issued rwts: total=12062,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:59.324 filename1: (groupid=0, jobs=1): err= 0: pid=1645278: Fri Jul 12 19:28:04 2024 00:31:59.324 read: IOPS=1856, BW=14.5MiB/s (15.2MB/s)(73.1MiB/5043msec) 00:31:59.324 slat (nsec): min=5399, max=27808, avg=6007.68, stdev=1551.77 00:31:59.324 clat (usec): min=2026, max=45749, avg=4270.51, stdev=1592.79 00:31:59.324 lat (usec): min=2031, max=45777, avg=4276.52, stdev=1592.92 00:31:59.324 clat percentiles (usec): 00:31:59.324 | 1.00th=[ 2933], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3621], 00:31:59.324 | 30.00th=[ 3785], 40.00th=[ 3851], 50.00th=[ 3949], 60.00th=[ 4178], 00:31:59.324 | 70.00th=[ 4490], 80.00th=[ 4948], 90.00th=[ 5407], 95.00th=[ 5735], 00:31:59.324 | 99.00th=[ 6325], 99.50th=[ 6587], 99.90th=[42730], 99.95th=[45876], 00:31:59.324 | 99.99th=[45876] 00:31:59.324 bw ( KiB/s): min=13402, max=16256, per=22.83%, avg=14965.80, stdev=767.36, samples=10 00:31:59.324 iops : min= 1675, max= 2032, avg=1870.70, stdev=95.98, samples=10 00:31:59.324 lat (msec) : 4=52.79%, 10=47.09%, 50=0.12% 00:31:59.324 cpu : usr=97.26%, sys=2.50%, ctx=6, majf=0, minf=0 00:31:59.324 IO depths : 1=0.5%, 2=1.9%, 4=70.6%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:59.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.324 issued rwts: total=9360,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.324 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:59.324 00:31:59.324 Run status group 0 (all jobs): 00:31:59.324 READ: bw=64.0MiB/s (67.1MB/s), 14.5MiB/s-18.8MiB/s (15.2MB/s-19.8MB/s), io=323MiB (339MB), run=5001-5043msec 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
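Teardown, traced here and continuing below, mirrors the setup: each destroy_subsystem call deletes the NVMe-oF subsystem first and then its backing null bdev. A standalone sketch of the same RPC sequence (scripts/rpc.py standing in for the harness's rpc_cmd wrapper, running target and default RPC socket assumed):

    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_null_delete bdev_null0
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py bdev_null_delete bdev_null1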
00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.324 00:31:59.324 real 0m24.438s 00:31:59.324 user 5m19.112s 00:31:59.324 sys 0m3.930s 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 ************************************ 00:31:59.324 END TEST fio_dif_rand_params 00:31:59.324 ************************************ 00:31:59.324 19:28:05 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:59.324 19:28:05 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:59.324 19:28:05 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:59.324 19:28:05 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 ************************************ 00:31:59.324 START TEST fio_dif_digest 00:31:59.324 ************************************ 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:31:59.324 19:28:05 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:59.324 bdev_null0 00:31:59.324 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:31:59.325 [2024-07-12 19:28:05.199740] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # 
local asan_lib= 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:59.325 { 00:31:59.325 "params": { 00:31:59.325 "name": "Nvme$subsystem", 00:31:59.325 "trtype": "$TEST_TRANSPORT", 00:31:59.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.325 "adrfam": "ipv4", 00:31:59.325 "trsvcid": "$NVMF_PORT", 00:31:59.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.325 "hdgst": ${hdgst:-false}, 00:31:59.325 "ddgst": ${ddgst:-false} 00:31:59.325 }, 00:31:59.325 "method": "bdev_nvme_attach_controller" 00:31:59.325 } 00:31:59.325 EOF 00:31:59.325 )") 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
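This digest pass drives the same spdk_bdev ioengine, but with hdgst and ddgst set to true in the attach parameters (see the JSON printed in the next entries), which makes the NVMe/TCP initiator negotiate CRC32C header and data digests with the target. An illustrative job file matching the parameters set for this pass (bs=128k, iodepth=3, numjobs=3, runtime=10); the bdev name Nvme0n1 and the thread/time_based options are assumptions, not taken from the trace:

    cat > job.fio <<'EOF'
    [global]
    ioengine=spdk_bdev
    thread=1          # assumed: the bdev ioengine is normally run threaded
    time_based=1      # assumed, consistent with the ~10 s runtime reported below
    runtime=10
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3

    [filename0]
    filename=Nvme0n1  # assumed name of the namespace bdev created by attaching controller Nvme0
    EOF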
00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:59.325 "params": { 00:31:59.325 "name": "Nvme0", 00:31:59.325 "trtype": "tcp", 00:31:59.325 "traddr": "10.0.0.2", 00:31:59.325 "adrfam": "ipv4", 00:31:59.325 "trsvcid": "4420", 00:31:59.325 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.325 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:59.325 "hdgst": true, 00:31:59.325 "ddgst": true 00:31:59.325 }, 00:31:59.325 "method": "bdev_nvme_attach_controller" 00:31:59.325 }' 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:59.325 19:28:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.586 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:59.586 ... 
00:31:59.586 fio-3.35 00:31:59.586 Starting 3 threads 00:31:59.586 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.826 00:32:11.826 filename0: (groupid=0, jobs=1): err= 0: pid=1646632: Fri Jul 12 19:28:16 2024 00:32:11.826 read: IOPS=151, BW=18.9MiB/s (19.8MB/s)(189MiB/10008msec) 00:32:11.826 slat (nsec): min=5638, max=43492, avg=6592.05, stdev=1491.97 00:32:11.826 clat (msec): min=7, max=135, avg=19.85, stdev=15.76 00:32:11.826 lat (msec): min=7, max=135, avg=19.85, stdev=15.76 00:32:11.826 clat percentiles (msec): 00:32:11.826 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:32:11.826 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 15], 60.00th=[ 16], 00:32:11.826 | 70.00th=[ 16], 80.00th=[ 17], 90.00th=[ 54], 95.00th=[ 56], 00:32:11.826 | 99.00th=[ 59], 99.50th=[ 94], 99.90th=[ 99], 99.95th=[ 136], 00:32:11.826 | 99.99th=[ 136] 00:32:11.826 bw ( KiB/s): min=15360, max=22738, per=26.65%, avg=19325.70, stdev=2353.75, samples=20 00:32:11.826 iops : min= 120, max= 177, avg=150.95, stdev=18.34, samples=20 00:32:11.826 lat (msec) : 10=2.31%, 20=84.06%, 100=13.56%, 250=0.07% 00:32:11.826 cpu : usr=96.36%, sys=3.43%, ctx=15, majf=0, minf=176 00:32:11.826 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:11.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.826 issued rwts: total=1512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.826 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:11.826 filename0: (groupid=0, jobs=1): err= 0: pid=1646633: Fri Jul 12 19:28:16 2024 00:32:11.826 read: IOPS=161, BW=20.2MiB/s (21.2MB/s)(203MiB/10046msec) 00:32:11.826 slat (nsec): min=5679, max=36531, avg=6561.13, stdev=1366.45 00:32:11.826 clat (usec): min=7525, max=97009, avg=18532.72, stdev=13941.99 00:32:11.826 lat (usec): min=7531, max=97015, avg=18539.28, stdev=13941.96 00:32:11.826 clat percentiles (usec): 00:32:11.826 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[11076], 20.00th=[11863], 00:32:11.826 | 30.00th=[12780], 40.00th=[13698], 50.00th=[14222], 60.00th=[14746], 00:32:11.826 | 70.00th=[15401], 80.00th=[16188], 90.00th=[52691], 95.00th=[54789], 00:32:11.826 | 99.00th=[56886], 99.50th=[58459], 99.90th=[95945], 99.95th=[96994], 00:32:11.826 | 99.99th=[96994] 00:32:11.826 bw ( KiB/s): min=13824, max=26112, per=28.62%, avg=20748.80, stdev=3133.67, samples=20 00:32:11.826 iops : min= 108, max= 204, avg=162.10, stdev=24.48, samples=20 00:32:11.826 lat (msec) : 10=3.20%, 20=85.34%, 50=0.06%, 100=11.40% 00:32:11.826 cpu : usr=97.14%, sys=2.63%, ctx=19, majf=0, minf=134 00:32:11.826 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:11.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.826 issued rwts: total=1623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.826 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:11.826 filename0: (groupid=0, jobs=1): err= 0: pid=1646634: Fri Jul 12 19:28:16 2024 00:32:11.826 read: IOPS=254, BW=31.8MiB/s (33.3MB/s)(320MiB/10047msec) 00:32:11.826 slat (nsec): min=5676, max=36227, avg=6549.66, stdev=1090.27 00:32:11.826 clat (msec): min=5, max=134, avg=11.77, stdev= 5.81 00:32:11.826 lat (msec): min=5, max=134, avg=11.77, stdev= 5.81 00:32:11.826 clat percentiles (msec): 00:32:11.826 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 10], 
00:32:11.826 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 13], 00:32:11.826 | 70.00th=[ 13], 80.00th=[ 14], 90.00th=[ 15], 95.00th=[ 15], 00:32:11.826 | 99.00th=[ 52], 99.50th=[ 54], 99.90th=[ 93], 99.95th=[ 94], 00:32:11.826 | 99.99th=[ 134] 00:32:11.826 bw ( KiB/s): min=26368, max=36352, per=45.09%, avg=32691.20, stdev=2535.77, samples=20 00:32:11.826 iops : min= 206, max= 284, avg=255.40, stdev=19.81, samples=20 00:32:11.826 lat (msec) : 10=30.99%, 20=67.92%, 50=0.04%, 100=1.02%, 250=0.04% 00:32:11.826 cpu : usr=95.59%, sys=4.08%, ctx=26, majf=0, minf=150 00:32:11.826 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:11.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.826 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:11.826 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:11.826 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:11.826 00:32:11.826 Run status group 0 (all jobs): 00:32:11.826 READ: bw=70.8MiB/s (74.2MB/s), 18.9MiB/s-31.8MiB/s (19.8MB/s-33.3MB/s), io=711MiB (746MB), run=10008-10047msec 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.826 00:32:11.826 real 0m11.216s 00:32:11.826 user 0m45.969s 00:32:11.826 sys 0m1.356s 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:11.826 19:28:16 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:11.826 ************************************ 00:32:11.826 END TEST fio_dif_digest 00:32:11.826 ************************************ 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:11.826 19:28:16 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:11.826 19:28:16 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:11.826 rmmod nvme_tcp 00:32:11.826 rmmod nvme_fabrics 
00:32:11.826 rmmod nvme_keyring 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1636159 ']' 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1636159 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1636159 ']' 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1636159 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1636159 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1636159' 00:32:11.826 killing process with pid 1636159 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1636159 00:32:11.826 19:28:16 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1636159 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:11.826 19:28:16 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:14.372 Waiting for block devices as requested 00:32:14.372 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:14.372 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:14.372 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.372 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:14.372 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:14.372 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:14.372 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:14.632 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:14.632 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:14.893 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:14.893 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:14.893 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:14.893 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:15.154 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:15.154 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:15.154 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:15.154 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:15.414 19:28:21 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:15.414 19:28:21 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:15.414 19:28:21 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:15.414 19:28:21 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:15.414 19:28:21 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:15.414 19:28:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:15.414 19:28:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.958 19:28:23 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:17.958 00:32:17.958 real 1m17.049s 00:32:17.958 user 8m6.941s 00:32:17.958 sys 0m18.994s 00:32:17.958 19:28:23 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:17.958 19:28:23 nvmf_dif -- common/autotest_common.sh@10 -- 
# set +x 00:32:17.958 ************************************ 00:32:17.958 END TEST nvmf_dif 00:32:17.958 ************************************ 00:32:17.958 19:28:23 -- common/autotest_common.sh@1142 -- # return 0 00:32:17.958 19:28:23 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:17.958 19:28:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:17.958 19:28:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:17.958 19:28:23 -- common/autotest_common.sh@10 -- # set +x 00:32:17.958 ************************************ 00:32:17.958 START TEST nvmf_abort_qd_sizes 00:32:17.958 ************************************ 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:17.958 * Looking for test storage... 00:32:17.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:17.958 19:28:23 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:17.958 19:28:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:4b:00.0 (0x8086 - 0x159b)' 00:32:26.100 Found 0000:4b:00.0 (0x8086 - 0x159b) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:4b:00.1 (0x8086 - 0x159b)' 00:32:26.100 Found 0000:4b:00.1 (0x8086 - 0x159b) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.0: cvl_0_0' 00:32:26.100 Found net devices under 0000:4b:00.0: cvl_0_0 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:4b:00.1: cvl_0_1' 00:32:26.100 Found net devices under 0000:4b:00.1: cvl_0_1 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
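With both E810 ports bound to the ice driver (cvl_0_0 and cvl_0_1 above), nvmf_tcp_init splits them across network namespaces: cvl_0_0 becomes the target side at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the ip/iptables steps traced in the following entries:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator-side sanity check, as in the trace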
00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:26.100 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:26.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:26.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:32:26.101 00:32:26.101 --- 10.0.0.2 ping statistics --- 00:32:26.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.101 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:26.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:26.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.342 ms 00:32:26.101 00:32:26.101 --- 10.0.0.1 ping statistics --- 00:32:26.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:26.101 rtt min/avg/max/mdev = 0.342/0.342/0.342/0.000 ms 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:26.101 19:28:30 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:28.648 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:28.648 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1656063 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1656063 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1656063 ']' 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:28.908 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:28.908 19:28:34 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:28.908 [2024-07-12 19:28:34.917131] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:32:28.908 [2024-07-12 19:28:34.917195] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:28.908 EAL: No free 2048 kB hugepages reported on node 1 00:32:28.908 [2024-07-12 19:28:34.985659] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:29.169 [2024-07-12 19:28:35.051486] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:29.169 [2024-07-12 19:28:35.051520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:29.169 [2024-07-12 19:28:35.051528] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:29.169 [2024-07-12 19:28:35.051534] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:29.169 [2024-07-12 19:28:35.051540] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:29.169 [2024-07-12 19:28:35.051685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:29.169 [2024-07-12 19:28:35.051817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:29.169 [2024-07-12 19:28:35.052248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:29.169 [2024-07-12 19:28:35.052347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:29.741 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:29.742 19:28:35 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:29.742 19:28:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:29.742 ************************************ 00:32:29.742 START TEST spdk_target_abort 00:32:29.742 ************************************ 00:32:29.742 19:28:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:29.742 19:28:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:29.742 19:28:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:29.742 19:28:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.742 19:28:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.003 spdk_targetn1 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.003 [2024-07-12 19:28:36.069445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:30.003 [2024-07-12 19:28:36.109701] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:30.003 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:30.004 19:28:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:30.265 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:30.265 [2024-07-12 19:28:36.240643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:816 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:30.265 [2024-07-12 19:28:36.240671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0067 p:1 m:0 dnr:0 00:32:30.265 [2024-07-12 19:28:36.241063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:840 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:32:30.265 [2024-07-12 19:28:36.241074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:006b p:1 m:0 dnr:0 00:32:30.265 [2024-07-12 19:28:36.247544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:976 len:8 PRP1 0x2000078be000 PRP2 0x0 00:32:30.265 [2024-07-12 19:28:36.247558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:007b p:1 m:0 dnr:0 00:32:30.265 [2024-07-12 19:28:36.287639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:2352 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:30.265 [2024-07-12 19:28:36.287655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:30.265 [2024-07-12 19:28:36.296450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2640 len:8 PRP1 0x2000078be000 PRP2 0x0 00:32:30.265 [2024-07-12 19:28:36.296466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:33.570 Initializing NVMe Controllers 00:32:33.570 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:33.570 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:33.570 Initialization complete. Launching workers. 
00:32:33.570 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11647, failed: 5 00:32:33.570 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3352, failed to submit 8300 00:32:33.570 success 771, unsuccess 2581, failed 0 00:32:33.570 19:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:33.570 19:28:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:33.570 EAL: No free 2048 kB hugepages reported on node 1 00:32:35.485 [2024-07-12 19:28:41.466450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:48968 len:8 PRP1 0x200007c58000 PRP2 0x0 00:32:35.485 [2024-07-12 19:28:41.466495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00ee p:0 m:0 dnr:0 00:32:36.058 [2024-07-12 19:28:42.106156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:178 nsid:1 lba:63736 len:8 PRP1 0x200007c40000 PRP2 0x0 00:32:36.058 [2024-07-12 19:28:42.106190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:178 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:36.630 Initializing NVMe Controllers 00:32:36.630 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:36.630 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:36.630 Initialization complete. Launching workers. 00:32:36.630 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8670, failed: 2 00:32:36.630 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1225, failed to submit 7447 00:32:36.630 success 322, unsuccess 903, failed 0 00:32:36.630 19:28:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:36.630 19:28:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:36.630 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.889 [2024-07-12 19:28:42.908517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:171 nsid:1 lba:28784 len:8 PRP1 0x2000078d6000 PRP2 0x0 00:32:36.889 [2024-07-12 19:28:42.908543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:171 cdw0:0 sqhd:0098 p:0 m:0 dnr:0 00:32:40.224 Initializing NVMe Controllers 00:32:40.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:40.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:40.224 Initialization complete. Launching workers. 
00:32:40.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 41455, failed: 1 00:32:40.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2656, failed to submit 38800 00:32:40.224 success 591, unsuccess 2065, failed 0 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.224 19:28:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1656063 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1656063 ']' 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1656063 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1656063 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1656063' 00:32:41.607 killing process with pid 1656063 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1656063 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1656063 00:32:41.607 00:32:41.607 real 0m11.968s 00:32:41.607 user 0m48.498s 00:32:41.607 sys 0m1.912s 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:41.607 19:28:47 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:41.607 ************************************ 00:32:41.607 END TEST spdk_target_abort 00:32:41.607 ************************************ 00:32:41.868 19:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:41.868 19:28:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:41.868 19:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:41.868 19:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:41.868 19:28:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:41.868 
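Before the kernel-target variant starts below, note that the spdk_target_abort phase that just finished is driven entirely over JSON-RPC. A condensed, hedged sketch of the same sequence using the scripts/rpc.py equivalents of the rpc_cmd calls traced above; the PCIe address 0000:65:00.0, the bdev name spdk_targetn1 and the 10.0.0.2:4420 listener are values from this particular run and would differ on another machine:

  # Sketch of the spdk_target_abort setup, assuming a running nvmf_tgt on the
  # default /var/tmp/spdk.sock (in this run it sits inside the cvl_0_0_ns_spdk netns).
  RPC="./scripts/rpc.py"
  $RPC bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target
  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
  # Exercise abort handling at each queue depth, as abort_qd_sizes.sh does:
  for qd in 4 24 64; do
      ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
  done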
************************************ 00:32:41.868 START TEST kernel_target_abort 00:32:41.868 ************************************ 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:41.868 19:28:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:45.173 Waiting for block devices as requested 00:32:45.173 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:45.173 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:45.173 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:45.434 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:45.434 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:45.434 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:45.434 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:45.694 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:45.694 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:45.955 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:45.955 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:45.955 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:46.215 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:46.215 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:46.215 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:46.215 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:46.475 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:46.735 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:46.735 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:46.735 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:46.735 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:46.735 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:46.736 No valid GPT data, bailing 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:46.736 19:28:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be --hostid=00d0226a-fbea-ec11-9bc7-a4bf019282be -a 10.0.0.1 -t tcp -s 4420 00:32:46.736 00:32:46.736 Discovery Log Number of Records 2, Generation counter 2 00:32:46.736 =====Discovery Log Entry 0====== 00:32:46.736 trtype: tcp 00:32:46.736 adrfam: ipv4 00:32:46.736 subtype: current discovery subsystem 00:32:46.736 treq: not specified, sq flow control disable supported 00:32:46.736 portid: 1 00:32:46.736 trsvcid: 4420 00:32:46.736 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:46.736 traddr: 10.0.0.1 00:32:46.736 eflags: none 00:32:46.736 sectype: none 00:32:46.736 =====Discovery Log Entry 1====== 00:32:46.736 trtype: tcp 00:32:46.736 adrfam: ipv4 00:32:46.736 subtype: nvme subsystem 00:32:46.736 treq: not specified, sq flow control disable supported 00:32:46.736 portid: 1 00:32:46.736 trsvcid: 4420 00:32:46.736 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:46.736 traddr: 10.0.0.1 00:32:46.736 eflags: none 00:32:46.736 sectype: none 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.736 19:28:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:46.736 19:28:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.021 EAL: No free 2048 kB hugepages reported on node 1 00:32:50.319 Initializing NVMe Controllers 00:32:50.319 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.319 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:50.319 Initialization complete. Launching workers. 00:32:50.319 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 51497, failed: 0 00:32:50.319 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 51497, failed to submit 0 00:32:50.319 success 0, unsuccess 51497, failed 0 00:32:50.319 19:28:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:50.319 19:28:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.319 EAL: No free 2048 kB hugepages reported on node 1 00:32:53.615 Initializing NVMe Controllers 00:32:53.615 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:53.615 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:53.615 Initialization complete. Launching workers. 
00:32:53.615 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 91938, failed: 0 00:32:53.615 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 23174, failed to submit 68764 00:32:53.615 success 0, unsuccess 23174, failed 0 00:32:53.615 19:28:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.615 19:28:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.615 EAL: No free 2048 kB hugepages reported on node 1 00:32:56.161 Initializing NVMe Controllers 00:32:56.161 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.161 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.161 Initialization complete. Launching workers. 00:32:56.161 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 88682, failed: 0 00:32:56.161 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22138, failed to submit 66544 00:32:56.161 success 0, unsuccess 22138, failed 0 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:56.161 19:29:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:59.464 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:32:59.464 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:59.464 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:01.376 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:01.637 00:33:01.637 real 0m19.750s 00:33:01.637 user 0m8.288s 00:33:01.637 sys 0m6.161s 00:33:01.637 19:29:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:01.637 19:29:07 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:01.637 ************************************ 00:33:01.637 END TEST kernel_target_abort 00:33:01.637 ************************************ 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:01.637 rmmod nvme_tcp 00:33:01.637 rmmod nvme_fabrics 00:33:01.637 rmmod nvme_keyring 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1656063 ']' 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1656063 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1656063 ']' 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1656063 00:33:01.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1656063) - No such process 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1656063 is not found' 00:33:01.637 Process with pid 1656063 is not found 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:01.637 19:29:07 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:04.951 Waiting for block devices as requested 00:33:04.951 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:04.951 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:04.951 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:05.211 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:05.211 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:05.211 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:05.472 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:05.472 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:05.472 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:05.732 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:05.732 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:05.732 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:05.993 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:05.994 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:33:05.994 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:05.994 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:06.254 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:06.515 19:29:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.429 19:29:14 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:08.429 00:33:08.429 real 0m50.824s 00:33:08.429 user 1m2.012s 00:33:08.429 sys 0m18.628s 00:33:08.429 19:29:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:08.429 19:29:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:08.429 ************************************ 00:33:08.429 END TEST nvmf_abort_qd_sizes 00:33:08.429 ************************************ 00:33:08.429 19:29:14 -- common/autotest_common.sh@1142 -- # return 0 00:33:08.429 19:29:14 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:08.429 19:29:14 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:08.429 19:29:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:08.429 19:29:14 -- common/autotest_common.sh@10 -- # set +x 00:33:08.689 ************************************ 00:33:08.689 START TEST keyring_file 00:33:08.689 ************************************ 00:33:08.689 19:29:14 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:08.689 * Looking for test storage... 
00:33:08.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:08.689 19:29:14 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:08.689 19:29:14 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:08.689 19:29:14 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.690 19:29:14 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.690 19:29:14 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.690 19:29:14 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.690 19:29:14 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.690 19:29:14 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.690 19:29:14 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.690 19:29:14 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:08.690 19:29:14 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jZYMp3PZ8Y 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:08.690 19:29:14 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jZYMp3PZ8Y 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jZYMp3PZ8Y 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jZYMp3PZ8Y 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.HMxt72rusl 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:08.690 19:29:14 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.HMxt72rusl 00:33:08.690 19:29:14 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.HMxt72rusl 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.HMxt72rusl 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@30 -- # tgtpid=1666063 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1666063 00:33:08.690 19:29:14 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1666063 ']' 00:33:08.690 19:29:14 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:08.690 19:29:14 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:08.690 19:29:14 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:08.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:08.690 19:29:14 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:08.690 19:29:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:08.690 19:29:14 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:08.950 [2024-07-12 19:29:14.848219] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:33:08.950 [2024-07-12 19:29:14.848281] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666063 ] 00:33:08.950 EAL: No free 2048 kB hugepages reported on node 1 00:33:08.950 [2024-07-12 19:29:14.910923] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:08.950 [2024-07-12 19:29:14.982750] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:09.520 19:29:15 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:09.520 19:29:15 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:09.520 19:29:15 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:09.520 19:29:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.520 19:29:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.520 [2024-07-12 19:29:15.605873] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:09.520 null0 00:33:09.520 [2024-07-12 19:29:15.637916] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:09.520 [2024-07-12 19:29:15.638147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:09.520 [2024-07-12 19:29:15.645928] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.780 19:29:15 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.780 [2024-07-12 19:29:15.657963] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:09.780 request: 00:33:09.780 { 00:33:09.780 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:09.780 "secure_channel": false, 00:33:09.780 "listen_address": { 00:33:09.780 "trtype": "tcp", 00:33:09.780 "traddr": "127.0.0.1", 00:33:09.780 "trsvcid": "4420" 00:33:09.780 }, 00:33:09.780 "method": "nvmf_subsystem_add_listener", 00:33:09.780 "req_id": 1 00:33:09.780 } 00:33:09.780 Got JSON-RPC error response 00:33:09.780 response: 00:33:09.780 { 00:33:09.780 "code": -32602, 00:33:09.780 "message": "Invalid parameters" 00:33:09.780 } 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 
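
Note on the NOT rpc_cmd block above: the test deliberately re-adds the 127.0.0.1:4420 listener and expects the "Listener already exists" failure, surfaced as JSON-RPC error -32602. Stripped of the rpc.py wrapper, that check is one JSON-RPC request on the target's UNIX socket; the sketch below reproduces it. The one-object-per-call framing and the fixed request id are simplifications for illustration, not the real rpc.py client.

    # Sketch of the duplicate-listener check without rpc.py: send the same
    # nvmf_subsystem_add_listener request shown in the trace to
    # /var/tmp/spdk.sock and expect a -32602 "Invalid parameters" error back.
    import json
    import socket


    def rpc(sock_path: str, method: str, params: dict) -> dict:
        req = {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
            s.connect(sock_path)
            s.sendall(json.dumps(req).encode())
            buf = b""
            while True:
                chunk = s.recv(4096)
                if not chunk:
                    break
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except ValueError:
                    continue  # response not complete yet
        raise RuntimeError("no JSON-RPC response")


    resp = rpc("/var/tmp/spdk.sock", "nvmf_subsystem_add_listener", {
        "nqn": "nqn.2016-06.io.spdk:cnode0",
        "secure_channel": False,
        "listen_address": {"trtype": "tcp", "traddr": "127.0.0.1", "trsvcid": "4420"},
    })
    print(resp["error"])  # expected here: code -32602, "Invalid parameters"
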
00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:09.780 19:29:15 keyring_file -- keyring/file.sh@46 -- # bperfpid=1666262 00:33:09.780 19:29:15 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1666262 /var/tmp/bperf.sock 00:33:09.780 19:29:15 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1666262 ']' 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:09.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:09.780 19:29:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:09.780 [2024-07-12 19:29:15.712724] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 00:33:09.780 [2024-07-12 19:29:15.712771] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1666262 ] 00:33:09.780 EAL: No free 2048 kB hugepages reported on node 1 00:33:09.780 [2024-07-12 19:29:15.786466] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:09.780 [2024-07-12 19:29:15.850432] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:10.351 19:29:16 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:10.351 19:29:16 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:10.351 19:29:16 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:10.351 19:29:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:10.612 19:29:16 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HMxt72rusl 00:33:10.612 19:29:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HMxt72rusl 00:33:10.873 19:29:16 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:10.873 19:29:16 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:10.873 19:29:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.873 19:29:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.873 19:29:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:10.873 19:29:16 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.jZYMp3PZ8Y == \/\t\m\p\/\t\m\p\.\j\Z\Y\M\p\3\P\Z\8\Y ]] 00:33:10.873 19:29:16 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:10.873 19:29:16 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:10.873 19:29:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:10.873 19:29:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:10.873 19:29:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.133 19:29:17 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.HMxt72rusl == \/\t\m\p\/\t\m\p\.\H\M\x\t\7\2\r\u\s\l ]] 00:33:11.133 19:29:17 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:11.133 19:29:17 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:11.133 19:29:17 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.133 19:29:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.393 19:29:17 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:11.393 19:29:17 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:11.393 19:29:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:11.653 [2024-07-12 19:29:17.546784] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:11.653 nvme0n1 00:33:11.653 19:29:17 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:11.653 19:29:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:11.653 19:29:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.653 19:29:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.653 19:29:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:11.653 19:29:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.913 19:29:17 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:11.913 19:29:17 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:11.913 19:29:17 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:11.913 19:29:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:11.913 19:29:17 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:11.913 19:29:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:11.913 19:29:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:11.913 19:29:17 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:11.913 19:29:17 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:11.913 Running I/O for 1 seconds... 00:33:13.294 00:33:13.294 Latency(us) 00:33:13.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:13.294 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:13.294 nvme0n1 : 1.02 7770.39 30.35 0.00 0.00 16343.35 4396.37 18022.40 00:33:13.294 =================================================================================================================== 00:33:13.294 Total : 7770.39 30.35 0.00 0.00 16343.35 4396.37 18022.40 00:33:13.294 0 00:33:13.294 19:29:19 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:13.294 19:29:19 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.294 19:29:19 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:13.294 19:29:19 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:13.294 19:29:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.591 19:29:19 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:13.591 19:29:19 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.591 19:29:19 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:13.591 19:29:19 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.591 19:29:19 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:13.591 19:29:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.591 19:29:19 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:13.591 19:29:19 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:13.591 19:29:19 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.591 19:29:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:13.863 [2024-07-12 19:29:19.718542] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:13.863 [2024-07-12 19:29:19.718771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e8170 (107): Transport endpoint is not connected 00:33:13.863 [2024-07-12 19:29:19.719767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e8170 (9): Bad file descriptor 00:33:13.863 [2024-07-12 19:29:19.720769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:13.863 [2024-07-12 19:29:19.720775] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:13.863 [2024-07-12 19:29:19.720781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:13.863 request: 00:33:13.863 { 00:33:13.863 "name": "nvme0", 00:33:13.863 "trtype": "tcp", 00:33:13.863 "traddr": "127.0.0.1", 00:33:13.863 "adrfam": "ipv4", 00:33:13.863 "trsvcid": "4420", 00:33:13.863 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:13.863 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:13.863 "prchk_reftag": false, 00:33:13.863 "prchk_guard": false, 00:33:13.863 "hdgst": false, 00:33:13.863 "ddgst": false, 00:33:13.863 "psk": "key1", 00:33:13.863 "method": "bdev_nvme_attach_controller", 00:33:13.863 "req_id": 1 00:33:13.863 } 00:33:13.863 Got JSON-RPC error response 00:33:13.863 response: 00:33:13.863 { 00:33:13.863 "code": -5, 00:33:13.863 "message": "Input/output error" 00:33:13.863 } 00:33:13.863 19:29:19 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:13.863 19:29:19 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:13.863 19:29:19 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:13.863 19:29:19 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:13.863 19:29:19 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:13.863 19:29:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:13.863 19:29:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.863 19:29:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.863 19:29:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:13.863 19:29:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.863 19:29:19 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:13.863 19:29:19 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:13.863 19:29:19 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:13.864 19:29:19 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:13.864 19:29:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:13.864 19:29:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:13.864 19:29:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:14.128 19:29:20 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:14.128 19:29:20 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:14.128 19:29:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:14.128 19:29:20 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:14.128 19:29:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:14.388 19:29:20 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:14.388 19:29:20 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:14.388 19:29:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.648 19:29:20 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:14.648 19:29:20 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.jZYMp3PZ8Y 00:33:14.648 19:29:20 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:14.648 19:29:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:14.648 [2024-07-12 19:29:20.696617] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jZYMp3PZ8Y': 0100660 00:33:14.648 [2024-07-12 19:29:20.696636] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:14.648 request: 00:33:14.648 { 00:33:14.648 "name": "key0", 00:33:14.648 "path": "/tmp/tmp.jZYMp3PZ8Y", 00:33:14.648 "method": "keyring_file_add_key", 00:33:14.648 "req_id": 1 00:33:14.648 } 00:33:14.648 Got JSON-RPC error response 00:33:14.648 response: 00:33:14.648 { 00:33:14.648 "code": -1, 00:33:14.648 "message": "Operation not permitted" 00:33:14.648 } 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:14.648 19:29:20 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:14.648 19:29:20 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:14.648 19:29:20 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.jZYMp3PZ8Y 00:33:14.648 19:29:20 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:14.649 19:29:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jZYMp3PZ8Y 00:33:14.908 19:29:20 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.jZYMp3PZ8Y 00:33:14.908 19:29:20 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:14.908 19:29:20 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:14.908 19:29:20 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:14.908 19:29:20 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:14.908 19:29:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:14.908 19:29:20 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.168 19:29:21 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:15.168 19:29:21 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.168 19:29:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.168 [2024-07-12 19:29:21.205907] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jZYMp3PZ8Y': No such file or directory 00:33:15.168 [2024-07-12 19:29:21.205923] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:15.168 [2024-07-12 19:29:21.205939] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:15.168 [2024-07-12 19:29:21.205944] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:15.168 [2024-07-12 19:29:21.205949] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:15.168 request: 00:33:15.168 { 00:33:15.168 "name": "nvme0", 00:33:15.168 "trtype": "tcp", 00:33:15.168 "traddr": "127.0.0.1", 00:33:15.168 "adrfam": "ipv4", 00:33:15.168 
"trsvcid": "4420", 00:33:15.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.168 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:15.168 "prchk_reftag": false, 00:33:15.168 "prchk_guard": false, 00:33:15.168 "hdgst": false, 00:33:15.168 "ddgst": false, 00:33:15.168 "psk": "key0", 00:33:15.168 "method": "bdev_nvme_attach_controller", 00:33:15.168 "req_id": 1 00:33:15.168 } 00:33:15.168 Got JSON-RPC error response 00:33:15.168 response: 00:33:15.168 { 00:33:15.168 "code": -19, 00:33:15.168 "message": "No such device" 00:33:15.168 } 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:15.168 19:29:21 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:15.168 19:29:21 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:15.168 19:29:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:15.429 19:29:21 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.izotCvqfGh 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:15.429 19:29:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:15.429 19:29:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:15.429 19:29:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:15.429 19:29:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:15.429 19:29:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:15.429 19:29:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.izotCvqfGh 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.izotCvqfGh 00:33:15.429 19:29:21 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.izotCvqfGh 00:33:15.429 19:29:21 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.izotCvqfGh 00:33:15.429 19:29:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.izotCvqfGh 00:33:15.689 19:29:21 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.689 19:29:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:15.689 nvme0n1 00:33:15.689 
19:29:21 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:15.689 19:29:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:15.689 19:29:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:15.689 19:29:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:15.689 19:29:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:15.689 19:29:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:15.950 19:29:21 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:15.950 19:29:21 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:15.950 19:29:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:16.210 19:29:22 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:16.210 19:29:22 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.210 19:29:22 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:16.210 19:29:22 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.210 19:29:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.470 19:29:22 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:16.470 19:29:22 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:16.470 19:29:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:16.731 19:29:22 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:16.731 19:29:22 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:16.731 19:29:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.731 19:29:22 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:16.731 19:29:22 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.izotCvqfGh 00:33:16.731 19:29:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.izotCvqfGh 00:33:16.991 19:29:22 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.HMxt72rusl 00:33:16.991 19:29:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.HMxt72rusl 00:33:16.991 19:29:23 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:16.991 19:29:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.252 nvme0n1 00:33:17.252 19:29:23 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:17.252 19:29:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:17.513 19:29:23 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:17.513 "subsystems": [ 00:33:17.513 { 00:33:17.513 "subsystem": "keyring", 00:33:17.513 "config": [ 00:33:17.513 { 00:33:17.513 "method": "keyring_file_add_key", 00:33:17.513 "params": { 00:33:17.513 "name": "key0", 00:33:17.513 "path": "/tmp/tmp.izotCvqfGh" 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "keyring_file_add_key", 00:33:17.513 "params": { 00:33:17.513 "name": "key1", 00:33:17.513 "path": "/tmp/tmp.HMxt72rusl" 00:33:17.513 } 00:33:17.513 } 00:33:17.513 ] 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "subsystem": "iobuf", 00:33:17.513 "config": [ 00:33:17.513 { 00:33:17.513 "method": "iobuf_set_options", 00:33:17.513 "params": { 00:33:17.513 "small_pool_count": 8192, 00:33:17.513 "large_pool_count": 1024, 00:33:17.513 "small_bufsize": 8192, 00:33:17.513 "large_bufsize": 135168 00:33:17.513 } 00:33:17.513 } 00:33:17.513 ] 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "subsystem": "sock", 00:33:17.513 "config": [ 00:33:17.513 { 00:33:17.513 "method": "sock_set_default_impl", 00:33:17.513 "params": { 00:33:17.513 "impl_name": "posix" 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "sock_impl_set_options", 00:33:17.513 "params": { 00:33:17.513 "impl_name": "ssl", 00:33:17.513 "recv_buf_size": 4096, 00:33:17.513 "send_buf_size": 4096, 00:33:17.513 "enable_recv_pipe": true, 00:33:17.513 "enable_quickack": false, 00:33:17.513 "enable_placement_id": 0, 00:33:17.513 "enable_zerocopy_send_server": true, 00:33:17.513 "enable_zerocopy_send_client": false, 00:33:17.513 "zerocopy_threshold": 0, 00:33:17.513 "tls_version": 0, 00:33:17.513 "enable_ktls": false 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "sock_impl_set_options", 00:33:17.513 "params": { 00:33:17.513 "impl_name": "posix", 00:33:17.513 "recv_buf_size": 2097152, 00:33:17.513 "send_buf_size": 2097152, 00:33:17.513 "enable_recv_pipe": true, 00:33:17.513 "enable_quickack": false, 00:33:17.513 "enable_placement_id": 0, 00:33:17.513 "enable_zerocopy_send_server": true, 00:33:17.513 "enable_zerocopy_send_client": false, 00:33:17.513 "zerocopy_threshold": 0, 00:33:17.513 "tls_version": 0, 00:33:17.513 "enable_ktls": false 00:33:17.513 } 00:33:17.513 } 00:33:17.513 ] 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "subsystem": "vmd", 00:33:17.513 "config": [] 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "subsystem": "accel", 00:33:17.513 "config": [ 00:33:17.513 { 00:33:17.513 "method": "accel_set_options", 00:33:17.513 "params": { 00:33:17.513 "small_cache_size": 128, 00:33:17.513 "large_cache_size": 16, 00:33:17.513 "task_count": 2048, 00:33:17.513 "sequence_count": 2048, 00:33:17.513 "buf_count": 2048 00:33:17.513 } 00:33:17.513 } 00:33:17.513 ] 00:33:17.513 
}, 00:33:17.513 { 00:33:17.513 "subsystem": "bdev", 00:33:17.513 "config": [ 00:33:17.513 { 00:33:17.513 "method": "bdev_set_options", 00:33:17.513 "params": { 00:33:17.513 "bdev_io_pool_size": 65535, 00:33:17.513 "bdev_io_cache_size": 256, 00:33:17.513 "bdev_auto_examine": true, 00:33:17.513 "iobuf_small_cache_size": 128, 00:33:17.513 "iobuf_large_cache_size": 16 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "bdev_raid_set_options", 00:33:17.513 "params": { 00:33:17.513 "process_window_size_kb": 1024 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "bdev_iscsi_set_options", 00:33:17.513 "params": { 00:33:17.513 "timeout_sec": 30 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "bdev_nvme_set_options", 00:33:17.513 "params": { 00:33:17.513 "action_on_timeout": "none", 00:33:17.513 "timeout_us": 0, 00:33:17.513 "timeout_admin_us": 0, 00:33:17.513 "keep_alive_timeout_ms": 10000, 00:33:17.513 "arbitration_burst": 0, 00:33:17.513 "low_priority_weight": 0, 00:33:17.513 "medium_priority_weight": 0, 00:33:17.513 "high_priority_weight": 0, 00:33:17.513 "nvme_adminq_poll_period_us": 10000, 00:33:17.513 "nvme_ioq_poll_period_us": 0, 00:33:17.513 "io_queue_requests": 512, 00:33:17.513 "delay_cmd_submit": true, 00:33:17.513 "transport_retry_count": 4, 00:33:17.513 "bdev_retry_count": 3, 00:33:17.513 "transport_ack_timeout": 0, 00:33:17.513 "ctrlr_loss_timeout_sec": 0, 00:33:17.513 "reconnect_delay_sec": 0, 00:33:17.513 "fast_io_fail_timeout_sec": 0, 00:33:17.513 "disable_auto_failback": false, 00:33:17.513 "generate_uuids": false, 00:33:17.513 "transport_tos": 0, 00:33:17.513 "nvme_error_stat": false, 00:33:17.513 "rdma_srq_size": 0, 00:33:17.513 "io_path_stat": false, 00:33:17.513 "allow_accel_sequence": false, 00:33:17.513 "rdma_max_cq_size": 0, 00:33:17.513 "rdma_cm_event_timeout_ms": 0, 00:33:17.513 "dhchap_digests": [ 00:33:17.513 "sha256", 00:33:17.513 "sha384", 00:33:17.513 "sha512" 00:33:17.513 ], 00:33:17.513 "dhchap_dhgroups": [ 00:33:17.513 "null", 00:33:17.513 "ffdhe2048", 00:33:17.513 "ffdhe3072", 00:33:17.513 "ffdhe4096", 00:33:17.513 "ffdhe6144", 00:33:17.513 "ffdhe8192" 00:33:17.513 ] 00:33:17.513 } 00:33:17.513 }, 00:33:17.513 { 00:33:17.513 "method": "bdev_nvme_attach_controller", 00:33:17.513 "params": { 00:33:17.513 "name": "nvme0", 00:33:17.513 "trtype": "TCP", 00:33:17.513 "adrfam": "IPv4", 00:33:17.513 "traddr": "127.0.0.1", 00:33:17.513 "trsvcid": "4420", 00:33:17.514 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.514 "prchk_reftag": false, 00:33:17.514 "prchk_guard": false, 00:33:17.514 "ctrlr_loss_timeout_sec": 0, 00:33:17.514 "reconnect_delay_sec": 0, 00:33:17.514 "fast_io_fail_timeout_sec": 0, 00:33:17.514 "psk": "key0", 00:33:17.514 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.514 "hdgst": false, 00:33:17.514 "ddgst": false 00:33:17.514 } 00:33:17.514 }, 00:33:17.514 { 00:33:17.514 "method": "bdev_nvme_set_hotplug", 00:33:17.514 "params": { 00:33:17.514 "period_us": 100000, 00:33:17.514 "enable": false 00:33:17.514 } 00:33:17.514 }, 00:33:17.514 { 00:33:17.514 "method": "bdev_wait_for_examine" 00:33:17.514 } 00:33:17.514 ] 00:33:17.514 }, 00:33:17.514 { 00:33:17.514 "subsystem": "nbd", 00:33:17.514 "config": [] 00:33:17.514 } 00:33:17.514 ] 00:33:17.514 }' 00:33:17.514 19:29:23 keyring_file -- keyring/file.sh@114 -- # killprocess 1666262 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1666262 ']' 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1666262 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666262 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666262' 00:33:17.514 killing process with pid 1666262 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@967 -- # kill 1666262 00:33:17.514 Received shutdown signal, test time was about 1.000000 seconds 00:33:17.514 00:33:17.514 Latency(us) 00:33:17.514 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:17.514 =================================================================================================================== 00:33:17.514 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:17.514 19:29:23 keyring_file -- common/autotest_common.sh@972 -- # wait 1666262 00:33:17.774 19:29:23 keyring_file -- keyring/file.sh@117 -- # bperfpid=1667861 00:33:17.774 19:29:23 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1667861 /var/tmp/bperf.sock 00:33:17.774 19:29:23 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1667861 ']' 00:33:17.774 19:29:23 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:17.774 19:29:23 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:17.774 19:29:23 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:17.774 19:29:23 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
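
Note on the restart above: the second bdevperf instance is launched with "-c /dev/fd/63", i.e. the JSON emitted by save_config a few lines earlier is echoed straight back in as its startup config, keyring section and psk reference included. The sketch below only shows how to pull the keyring- and PSK-related pieces out of such a config for inspection; the file name is hypothetical (the test uses process substitution, it never writes a file), and the field names mirror the save_config output in the trace.

    # Sketch: inspect the saved config that gets fed back to bdevperf via
    # "-c /dev/fd/63".  "bperf_config.json" is a hypothetical dump of that JSON.
    import json

    with open("bperf_config.json") as f:
        cfg = json.load(f)

    for subsystem in cfg["subsystems"]:
        if subsystem["subsystem"] == "keyring":
            for entry in subsystem["config"]:
                params = entry["params"]
                print("keyring:", params["name"], "->", params["path"])
        elif subsystem["subsystem"] == "bdev":
            for entry in subsystem["config"]:
                if entry["method"] == "bdev_nvme_attach_controller":
                    params = entry["params"]
                    print("controller", params["name"], "psk =", params["psk"])
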
00:33:17.774 19:29:23 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:17.774 19:29:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.774 19:29:23 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:17.774 "subsystems": [ 00:33:17.774 { 00:33:17.774 "subsystem": "keyring", 00:33:17.774 "config": [ 00:33:17.774 { 00:33:17.774 "method": "keyring_file_add_key", 00:33:17.774 "params": { 00:33:17.774 "name": "key0", 00:33:17.774 "path": "/tmp/tmp.izotCvqfGh" 00:33:17.774 } 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "method": "keyring_file_add_key", 00:33:17.774 "params": { 00:33:17.774 "name": "key1", 00:33:17.774 "path": "/tmp/tmp.HMxt72rusl" 00:33:17.774 } 00:33:17.774 } 00:33:17.774 ] 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "subsystem": "iobuf", 00:33:17.774 "config": [ 00:33:17.774 { 00:33:17.774 "method": "iobuf_set_options", 00:33:17.774 "params": { 00:33:17.774 "small_pool_count": 8192, 00:33:17.774 "large_pool_count": 1024, 00:33:17.774 "small_bufsize": 8192, 00:33:17.774 "large_bufsize": 135168 00:33:17.774 } 00:33:17.774 } 00:33:17.774 ] 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "subsystem": "sock", 00:33:17.774 "config": [ 00:33:17.774 { 00:33:17.774 "method": "sock_set_default_impl", 00:33:17.774 "params": { 00:33:17.774 "impl_name": "posix" 00:33:17.774 } 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "method": "sock_impl_set_options", 00:33:17.774 "params": { 00:33:17.774 "impl_name": "ssl", 00:33:17.774 "recv_buf_size": 4096, 00:33:17.774 "send_buf_size": 4096, 00:33:17.774 "enable_recv_pipe": true, 00:33:17.774 "enable_quickack": false, 00:33:17.774 "enable_placement_id": 0, 00:33:17.774 "enable_zerocopy_send_server": true, 00:33:17.774 "enable_zerocopy_send_client": false, 00:33:17.774 "zerocopy_threshold": 0, 00:33:17.774 "tls_version": 0, 00:33:17.774 "enable_ktls": false 00:33:17.774 } 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "method": "sock_impl_set_options", 00:33:17.774 "params": { 00:33:17.774 "impl_name": "posix", 00:33:17.774 "recv_buf_size": 2097152, 00:33:17.774 "send_buf_size": 2097152, 00:33:17.774 "enable_recv_pipe": true, 00:33:17.774 "enable_quickack": false, 00:33:17.774 "enable_placement_id": 0, 00:33:17.774 "enable_zerocopy_send_server": true, 00:33:17.774 "enable_zerocopy_send_client": false, 00:33:17.774 "zerocopy_threshold": 0, 00:33:17.774 "tls_version": 0, 00:33:17.774 "enable_ktls": false 00:33:17.774 } 00:33:17.774 } 00:33:17.774 ] 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "subsystem": "vmd", 00:33:17.774 "config": [] 00:33:17.774 }, 00:33:17.774 { 00:33:17.774 "subsystem": "accel", 00:33:17.774 "config": [ 00:33:17.774 { 00:33:17.774 "method": "accel_set_options", 00:33:17.774 "params": { 00:33:17.774 "small_cache_size": 128, 00:33:17.774 "large_cache_size": 16, 00:33:17.774 "task_count": 2048, 00:33:17.775 "sequence_count": 2048, 00:33:17.775 "buf_count": 2048 00:33:17.775 } 00:33:17.775 } 00:33:17.775 ] 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "subsystem": "bdev", 00:33:17.775 "config": [ 00:33:17.775 { 00:33:17.775 "method": "bdev_set_options", 00:33:17.775 "params": { 00:33:17.775 "bdev_io_pool_size": 65535, 00:33:17.775 "bdev_io_cache_size": 256, 00:33:17.775 "bdev_auto_examine": true, 00:33:17.775 "iobuf_small_cache_size": 128, 00:33:17.775 "iobuf_large_cache_size": 16 00:33:17.775 } 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "method": "bdev_raid_set_options", 00:33:17.775 "params": { 00:33:17.775 "process_window_size_kb": 1024 00:33:17.775 } 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 
"method": "bdev_iscsi_set_options", 00:33:17.775 "params": { 00:33:17.775 "timeout_sec": 30 00:33:17.775 } 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "method": "bdev_nvme_set_options", 00:33:17.775 "params": { 00:33:17.775 "action_on_timeout": "none", 00:33:17.775 "timeout_us": 0, 00:33:17.775 "timeout_admin_us": 0, 00:33:17.775 "keep_alive_timeout_ms": 10000, 00:33:17.775 "arbitration_burst": 0, 00:33:17.775 "low_priority_weight": 0, 00:33:17.775 "medium_priority_weight": 0, 00:33:17.775 "high_priority_weight": 0, 00:33:17.775 "nvme_adminq_poll_period_us": 10000, 00:33:17.775 "nvme_ioq_poll_period_us": 0, 00:33:17.775 "io_queue_requests": 512, 00:33:17.775 "delay_cmd_submit": true, 00:33:17.775 "transport_retry_count": 4, 00:33:17.775 "bdev_retry_count": 3, 00:33:17.775 "transport_ack_timeout": 0, 00:33:17.775 "ctrlr_loss_timeout_sec": 0, 00:33:17.775 "reconnect_delay_sec": 0, 00:33:17.775 "fast_io_fail_timeout_sec": 0, 00:33:17.775 "disable_auto_failback": false, 00:33:17.775 "generate_uuids": false, 00:33:17.775 "transport_tos": 0, 00:33:17.775 "nvme_error_stat": false, 00:33:17.775 "rdma_srq_size": 0, 00:33:17.775 "io_path_stat": false, 00:33:17.775 "allow_accel_sequence": false, 00:33:17.775 "rdma_max_cq_size": 0, 00:33:17.775 "rdma_cm_event_timeout_ms": 0, 00:33:17.775 "dhchap_digests": [ 00:33:17.775 "sha256", 00:33:17.775 "sha384", 00:33:17.775 "sha512" 00:33:17.775 ], 00:33:17.775 "dhchap_dhgroups": [ 00:33:17.775 "null", 00:33:17.775 "ffdhe2048", 00:33:17.775 "ffdhe3072", 00:33:17.775 "ffdhe4096", 00:33:17.775 "ffdhe6144", 00:33:17.775 "ffdhe8192" 00:33:17.775 ] 00:33:17.775 } 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "method": "bdev_nvme_attach_controller", 00:33:17.775 "params": { 00:33:17.775 "name": "nvme0", 00:33:17.775 "trtype": "TCP", 00:33:17.775 "adrfam": "IPv4", 00:33:17.775 "traddr": "127.0.0.1", 00:33:17.775 "trsvcid": "4420", 00:33:17.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.775 "prchk_reftag": false, 00:33:17.775 "prchk_guard": false, 00:33:17.775 "ctrlr_loss_timeout_sec": 0, 00:33:17.775 "reconnect_delay_sec": 0, 00:33:17.775 "fast_io_fail_timeout_sec": 0, 00:33:17.775 "psk": "key0", 00:33:17.775 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:17.775 "hdgst": false, 00:33:17.775 "ddgst": false 00:33:17.775 } 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "method": "bdev_nvme_set_hotplug", 00:33:17.775 "params": { 00:33:17.775 "period_us": 100000, 00:33:17.775 "enable": false 00:33:17.775 } 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "method": "bdev_wait_for_examine" 00:33:17.775 } 00:33:17.775 ] 00:33:17.775 }, 00:33:17.775 { 00:33:17.775 "subsystem": "nbd", 00:33:17.775 "config": [] 00:33:17.775 } 00:33:17.775 ] 00:33:17.775 }' 00:33:17.775 [2024-07-12 19:29:23.728595] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
00:33:17.775 [2024-07-12 19:29:23.728670] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1667861 ] 00:33:17.775 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.775 [2024-07-12 19:29:23.809029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.775 [2024-07-12 19:29:23.862537] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.035 [2024-07-12 19:29:24.004108] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:18.605 19:29:24 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:18.605 19:29:24 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:18.605 19:29:24 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:18.605 19:29:24 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:18.605 19:29:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.605 19:29:24 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:18.605 19:29:24 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:18.605 19:29:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:18.605 19:29:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.605 19:29:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.605 19:29:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.605 19:29:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:18.865 19:29:24 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:18.865 19:29:24 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:18.865 19:29:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:18.865 19:29:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.865 19:29:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.865 19:29:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.865 19:29:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:18.865 19:29:24 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:18.865 19:29:24 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:18.865 19:29:24 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:18.865 19:29:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:19.126 19:29:25 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:19.126 19:29:25 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:19.126 19:29:25 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.izotCvqfGh /tmp/tmp.HMxt72rusl 00:33:19.126 19:29:25 keyring_file -- keyring/file.sh@20 -- # killprocess 1667861 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1667861 ']' 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1667861 00:33:19.126 19:29:25 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1667861 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1667861' 00:33:19.126 killing process with pid 1667861 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@967 -- # kill 1667861 00:33:19.126 Received shutdown signal, test time was about 1.000000 seconds 00:33:19.126 00:33:19.126 Latency(us) 00:33:19.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.126 =================================================================================================================== 00:33:19.126 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:19.126 19:29:25 keyring_file -- common/autotest_common.sh@972 -- # wait 1667861 00:33:19.386 19:29:25 keyring_file -- keyring/file.sh@21 -- # killprocess 1666063 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1666063 ']' 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1666063 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1666063 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1666063' 00:33:19.386 killing process with pid 1666063 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@967 -- # kill 1666063 00:33:19.386 [2024-07-12 19:29:25.351766] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:19.386 19:29:25 keyring_file -- common/autotest_common.sh@972 -- # wait 1666063 00:33:19.648 00:33:19.648 real 0m10.977s 00:33:19.648 user 0m25.840s 00:33:19.648 sys 0m2.557s 00:33:19.648 19:29:25 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:19.648 19:29:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:19.648 ************************************ 00:33:19.648 END TEST keyring_file 00:33:19.648 ************************************ 00:33:19.648 19:29:25 -- common/autotest_common.sh@1142 -- # return 0 00:33:19.648 19:29:25 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:19.648 19:29:25 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:19.648 19:29:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:19.648 19:29:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:19.648 19:29:25 -- common/autotest_common.sh@10 -- # set +x 00:33:19.648 ************************************ 00:33:19.648 START TEST keyring_linux 00:33:19.648 ************************************ 00:33:19.648 19:29:25 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:19.648 * Looking for test storage... 00:33:19.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00d0226a-fbea-ec11-9bc7-a4bf019282be 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.648 19:29:25 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.648 19:29:25 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.648 19:29:25 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.648 19:29:25 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.648 19:29:25 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.648 19:29:25 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.648 19:29:25 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:19.648 19:29:25 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:19.648 19:29:25 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:19.648 19:29:25 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:19.648 19:29:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:19.909 19:29:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:19.910 19:29:25 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:19.910 /tmp/:spdk-test:key0 00:33:19.910 19:29:25 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:19.910 19:29:25 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:19.910 19:29:25 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:19.910 /tmp/:spdk-test:key1 00:33:19.910 19:29:25 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1668470 00:33:19.910 19:29:25 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1668470 00:33:19.910 19:29:25 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:19.910 19:29:25 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1668470 ']' 00:33:19.910 19:29:25 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:19.910 19:29:25 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:19.910 19:29:25 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:19.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:19.910 19:29:25 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:19.910 19:29:25 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:19.910 [2024-07-12 19:29:25.938433] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
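The two /tmp/:spdk-test:key* files written above are interchange-format PSKs produced by format_interchange_psk through the inline "python -" step. Below is a minimal stand-alone sketch of that formatting, assuming the four bytes appended before base64-encoding are a little-endian CRC-32 of the configured key string (the base64 body above visibly starts with the ASCII of the key itself, which is why the string is encoded as-is; the make_psk_interchange helper name and the use of python3 here are illustrative, not part of the test):
    make_psk_interchange() {
        # $1 = configured key string, $2 = hash identifier (0 = no hash, as used here)
        local key=$1 hash=$2
        python3 -c 'import base64, sys, zlib; k = sys.argv[1].encode(); crc = zlib.crc32(k).to_bytes(4, "little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k + crc).decode()))' "$key" "$hash"
    }
    make_psk_interchange 00112233445566778899aabbccddeeff 0 > /tmp/:spdk-test:key0
    chmod 0600 /tmp/:spdk-test:key0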
00:33:19.910 [2024-07-12 19:29:25.938490] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668470 ] 00:33:19.910 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.910 [2024-07-12 19:29:25.997985] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:20.170 [2024-07-12 19:29:26.063468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:20.741 19:29:26 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:20.741 [2024-07-12 19:29:26.704226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.741 null0 00:33:20.741 [2024-07-12 19:29:26.736267] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:20.741 [2024-07-12 19:29:26.736650] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.741 19:29:26 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:20.741 782456712 00:33:20.741 19:29:26 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:20.741 1059439029 00:33:20.741 19:29:26 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1668514 00:33:20.741 19:29:26 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1668514 /var/tmp/bperf.sock 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1668514 ']' 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:20.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:20.741 19:29:26 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:20.741 19:29:26 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:20.741 [2024-07-12 19:29:26.808863] Starting SPDK v24.09-pre git sha1 2945695e6 / DPDK 24.03.0 initialization... 
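Pulled together from the traced commands above and below, this is the keyctl (keyutils) round-trip the test drives against the session keyring @s; the serial number is simply whatever keyctl add prints (782456712 for key0 in this run):
    sn=$(keyctl add user :spdk-test:key0 "$(cat /tmp/:spdk-test:key0)" @s)   # load the PSK payload, capture its serial
    keyctl search @s user :spdk-test:key0    # resolve the description back to the same serial
    keyctl print "$sn"                       # dump the payload (the NVMeTLSkey-1:00:... string)
    keyctl unlink "$sn"                      # drop the key again, as cleanup() does at the end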
00:33:20.741 [2024-07-12 19:29:26.808910] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1668514 ] 00:33:20.741 EAL: No free 2048 kB hugepages reported on node 1 00:33:21.001 [2024-07-12 19:29:26.881002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:21.001 [2024-07-12 19:29:26.934636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:21.572 19:29:27 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:21.572 19:29:27 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:21.572 19:29:27 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:21.572 19:29:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:21.831 19:29:27 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:21.831 19:29:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:21.832 19:29:27 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:21.832 19:29:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:22.091 [2024-07-12 19:29:28.061128] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:22.091 nvme0n1 00:33:22.091 19:29:28 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:22.091 19:29:28 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:22.091 19:29:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:22.091 19:29:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:22.091 19:29:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:22.091 19:29:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.352 19:29:28 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:22.352 19:29:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:22.352 19:29:28 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:22.352 19:29:28 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:22.352 19:29:28 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.352 19:29:28 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:22.352 19:29:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@25 -- # sn=782456712 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@26 -- # [[ 782456712 == \7\8\2\4\5\6\7\1\2 ]] 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 782456712 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:22.613 19:29:28 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:22.613 Running I/O for 1 seconds... 00:33:23.554 00:33:23.554 Latency(us) 00:33:23.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:23.554 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:23.554 nvme0n1 : 1.02 8014.29 31.31 0.00 0.00 15835.12 5816.32 16820.91 00:33:23.554 =================================================================================================================== 00:33:23.554 Total : 8014.29 31.31 0.00 0.00 15835.12 5816.32 16820.91 00:33:23.554 0 00:33:23.554 19:29:29 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:23.554 19:29:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:23.814 19:29:29 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:23.814 19:29:29 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:23.814 19:29:29 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:23.814 19:29:29 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:24.075 [2024-07-12 19:29:30.071617] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:24.075 [2024-07-12 19:29:30.071680] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ec0f0 (107): Transport endpoint is not connected 00:33:24.075 [2024-07-12 19:29:30.072676] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17ec0f0 (9): Bad file descriptor 00:33:24.075 [2024-07-12 19:29:30.073678] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:24.075 [2024-07-12 19:29:30.073685] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:24.075 [2024-07-12 19:29:30.073692] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:24.075 request: 00:33:24.075 { 00:33:24.075 "name": "nvme0", 00:33:24.075 "trtype": "tcp", 00:33:24.075 "traddr": "127.0.0.1", 00:33:24.075 "adrfam": "ipv4", 00:33:24.075 "trsvcid": "4420", 00:33:24.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:24.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:24.075 "prchk_reftag": false, 00:33:24.075 "prchk_guard": false, 00:33:24.075 "hdgst": false, 00:33:24.075 "ddgst": false, 00:33:24.075 "psk": ":spdk-test:key1", 00:33:24.075 "method": "bdev_nvme_attach_controller", 00:33:24.075 "req_id": 1 00:33:24.075 } 00:33:24.075 Got JSON-RPC error response 00:33:24.075 response: 00:33:24.075 { 00:33:24.075 "code": -5, 00:33:24.075 "message": "Input/output error" 00:33:24.075 } 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@33 -- # sn=782456712 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 782456712 00:33:24.075 1 links removed 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@33 -- # sn=1059439029 00:33:24.075 
19:29:30 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1059439029 00:33:24.075 1 links removed 00:33:24.075 19:29:30 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1668514 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1668514 ']' 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1668514 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668514 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668514' 00:33:24.075 killing process with pid 1668514 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 1668514 00:33:24.075 Received shutdown signal, test time was about 1.000000 seconds 00:33:24.075 00:33:24.075 Latency(us) 00:33:24.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:24.075 =================================================================================================================== 00:33:24.075 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:24.075 19:29:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 1668514 00:33:24.335 19:29:30 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1668470 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1668470 ']' 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1668470 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1668470 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1668470' 00:33:24.335 killing process with pid 1668470 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@967 -- # kill 1668470 00:33:24.335 19:29:30 keyring_linux -- common/autotest_common.sh@972 -- # wait 1668470 00:33:24.597 00:33:24.597 real 0m4.897s 00:33:24.597 user 0m8.352s 00:33:24.597 sys 0m1.415s 00:33:24.597 19:29:30 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:24.597 19:29:30 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:24.597 ************************************ 00:33:24.597 END TEST keyring_linux 00:33:24.597 ************************************ 00:33:24.597 19:29:30 -- common/autotest_common.sh@1142 -- # return 0 00:33:24.597 19:29:30 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:24.597 19:29:30 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:24.597 19:29:30 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:24.597 19:29:30 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:24.597 19:29:30 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:24.597 19:29:30 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:24.597 19:29:30 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:24.597 19:29:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:24.597 19:29:30 -- common/autotest_common.sh@10 -- # set +x 00:33:24.597 19:29:30 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:24.597 19:29:30 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:24.597 19:29:30 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:24.597 19:29:30 -- common/autotest_common.sh@10 -- # set +x 00:33:32.741 INFO: APP EXITING 00:33:32.741 INFO: killing all VMs 00:33:32.741 INFO: killing vhost app 00:33:32.741 WARN: no vhost pid file found 00:33:32.741 INFO: EXIT DONE 00:33:36.045 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:36.045 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:36.045 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:40.254 Cleaning 00:33:40.254 Removing: /var/run/dpdk/spdk0/config 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:40.254 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:40.254 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:40.254 Removing: /var/run/dpdk/spdk1/config 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:40.254 
Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:40.254 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:40.254 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:40.254 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:40.254 Removing: /var/run/dpdk/spdk2/config 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:40.254 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:40.254 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:40.254 Removing: /var/run/dpdk/spdk3/config 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:40.254 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:40.255 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:40.255 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:40.255 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:40.255 Removing: /var/run/dpdk/spdk4/config 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:40.255 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:40.255 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:40.255 Removing: /dev/shm/bdev_svc_trace.1 00:33:40.255 Removing: /dev/shm/nvmf_trace.0 00:33:40.255 Removing: /dev/shm/spdk_tgt_trace.pid1211425 00:33:40.255 Removing: /var/run/dpdk/spdk0 00:33:40.255 Removing: /var/run/dpdk/spdk1 00:33:40.255 Removing: /var/run/dpdk/spdk2 00:33:40.255 Removing: /var/run/dpdk/spdk3 00:33:40.255 Removing: /var/run/dpdk/spdk4 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1209830 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1211425 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1211953 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1213091 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1213329 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1214557 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1214729 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1215024 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1215977 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1216748 00:33:40.255 Removing: 
/var/run/dpdk/spdk_pid1217121 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1217390 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1217675 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1218001 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1218353 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1218712 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1218987 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1220158 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1223650 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1224233 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1224709 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1224978 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1225416 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1225460 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1225993 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1226129 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1226507 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1226534 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1226887 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1226946 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1227527 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1227710 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1228084 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1228450 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1228479 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1228676 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1228891 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1229246 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1229596 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1229943 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1230151 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1230355 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1230682 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1231037 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1231384 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1231621 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1231818 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1232125 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1232472 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1232830 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1233120 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1233312 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1233569 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1233922 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1234277 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1234627 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1234705 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1235108 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1239562 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1292686 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1297727 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1309683 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1315895 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1320815 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1321490 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1328741 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1336436 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1336438 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1337446 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1338479 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1339573 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1340209 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1340360 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1340578 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1340798 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1340800 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1341807 00:33:40.255 Removing: 
/var/run/dpdk/spdk_pid1342810 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1343818 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1344495 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1344505 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1344829 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1346259 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1347654 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1357662 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1358016 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1363057 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1369808 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1372884 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1385576 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1396244 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1398254 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1399282 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1419556 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1424233 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1456229 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1461292 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1463293 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1465611 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1465680 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1465984 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1466326 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1466895 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1469053 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1470072 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1470551 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1473782 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1474491 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1475222 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1480264 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1492183 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1496996 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1504226 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1505742 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1507542 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1512622 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1517496 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1526496 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1526498 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1532001 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1532327 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1532643 00:33:40.255 Removing: /var/run/dpdk/spdk_pid1533010 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1533016 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1538480 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1539202 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1544371 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1547714 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1554094 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1560622 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1570531 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1579301 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1579305 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1602143 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1602907 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1603587 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1604277 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1605339 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1606024 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1606709 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1607383 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1612321 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1612597 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1619779 00:33:40.516 Removing: 
/var/run/dpdk/spdk_pid1619874 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1622653 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1630208 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1630213 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1636456 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1638729 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1641214 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1642431 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1644956 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1646411 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1656420 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1656905 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1657450 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1660374 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1661046 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1661540 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1666063 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1666262 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1667861 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1668470 00:33:40.516 Removing: /var/run/dpdk/spdk_pid1668514 00:33:40.516 Clean 00:33:40.516 19:29:46 -- common/autotest_common.sh@1451 -- # return 0 00:33:40.516 19:29:46 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:40.516 19:29:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:40.516 19:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:40.777 19:29:46 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:40.777 19:29:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:40.777 19:29:46 -- common/autotest_common.sh@10 -- # set +x 00:33:40.777 19:29:46 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:40.777 19:29:46 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:40.777 19:29:46 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:40.777 19:29:46 -- spdk/autotest.sh@391 -- # hash lcov 00:33:40.777 19:29:46 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:40.777 19:29:46 -- spdk/autotest.sh@393 -- # hostname 00:33:40.777 19:29:46 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-09 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:41.038 geninfo: WARNING: invalid characters removed from testname! 
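The coverage post-processing traced here and continued in the lines below reduces to a short lcov sequence; $SPDK_DIR stands in for the workspace path, and the --rc branch/function switches shown in the log are left out for brevity:
    lcov -q -c --no-external -d "$SPDK_DIR" -t "$(hostname)" -o cov_test.info   # capture counters from this run
    lcov -q -a cov_base.info -a cov_test.info -o cov_total.info                 # merge with the pre-test baseline
    lcov -q -r cov_total.info '*/dpdk/*' '/usr/*' -o cov_total.info             # strip vendored and system sources
    # the log removes each pattern in its own -r pass; lcov accepts either form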
00:34:07.688 19:30:11 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:07.947 19:30:13 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:09.888 19:30:15 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:11.269 19:30:17 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:12.650 19:30:18 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:14.557 19:30:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:15.941 19:30:21 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:15.941 19:30:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:15.941 19:30:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:15.941 19:30:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:15.941 19:30:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:15.941 19:30:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.941 19:30:21 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.941 19:30:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.942 19:30:21 -- paths/export.sh@5 -- $ export PATH 00:34:15.942 19:30:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:15.942 19:30:21 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:15.942 19:30:21 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:15.942 19:30:21 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720805421.XXXXXX 00:34:15.942 19:30:21 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720805421.b6ucUA 00:34:15.942 19:30:21 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:15.942 19:30:21 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:15.942 19:30:21 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:15.942 19:30:21 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:15.942 19:30:21 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:15.942 19:30:21 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:15.942 19:30:21 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:15.942 19:30:21 -- common/autotest_common.sh@10 -- $ set +x 00:34:15.942 19:30:22 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:15.942 19:30:22 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:15.942 19:30:22 -- pm/common@17 -- $ local monitor 00:34:15.942 19:30:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.942 19:30:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.942 19:30:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.942 19:30:22 -- pm/common@21 -- $ date +%s 00:34:15.942 19:30:22 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:15.942 19:30:22 -- pm/common@21 -- $ date +%s 00:34:15.942 
19:30:22 -- pm/common@25 -- $ sleep 1 00:34:15.942 19:30:22 -- pm/common@21 -- $ date +%s 00:34:15.942 19:30:22 -- pm/common@21 -- $ date +%s 00:34:15.942 19:30:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805422 00:34:15.942 19:30:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805422 00:34:15.942 19:30:22 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805422 00:34:15.942 19:30:22 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720805422 00:34:15.942 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805422_collect-vmstat.pm.log 00:34:16.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805422_collect-cpu-load.pm.log 00:34:16.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805422_collect-cpu-temp.pm.log 00:34:16.203 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720805422_collect-bmc-pm.bmc.pm.log 00:34:17.145 19:30:23 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:17.145 19:30:23 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:17.145 19:30:23 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:17.145 19:30:23 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:17.145 19:30:23 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:17.145 19:30:23 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:17.145 19:30:23 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:17.145 19:30:23 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:17.145 19:30:23 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:17.145 19:30:23 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:17.145 19:30:23 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:17.145 19:30:23 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:17.145 19:30:23 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:17.145 19:30:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:17.145 19:30:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:17.145 19:30:23 -- pm/common@44 -- $ pid=1681554 00:34:17.145 19:30:23 -- pm/common@50 -- $ kill -TERM 1681554 00:34:17.145 19:30:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:17.145 19:30:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:17.145 19:30:23 -- pm/common@44 -- $ pid=1681555 00:34:17.145 19:30:23 -- pm/common@50 -- $ kill 
-TERM 1681555 00:34:17.145 19:30:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:17.145 19:30:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:17.145 19:30:23 -- pm/common@44 -- $ pid=1681557 00:34:17.145 19:30:23 -- pm/common@50 -- $ kill -TERM 1681557 00:34:17.145 19:30:23 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:17.145 19:30:23 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:17.145 19:30:23 -- pm/common@44 -- $ pid=1681584 00:34:17.145 19:30:23 -- pm/common@50 -- $ sudo -E kill -TERM 1681584 00:34:17.145 + [[ -n 1090034 ]] 00:34:17.145 + sudo kill 1090034 00:34:17.155 [Pipeline] } 00:34:17.173 [Pipeline] // stage 00:34:17.177 [Pipeline] } 00:34:17.194 [Pipeline] // timeout 00:34:17.199 [Pipeline] } 00:34:17.214 [Pipeline] // catchError 00:34:17.218 [Pipeline] } 00:34:17.233 [Pipeline] // wrap 00:34:17.238 [Pipeline] } 00:34:17.253 [Pipeline] // catchError 00:34:17.260 [Pipeline] stage 00:34:17.261 [Pipeline] { (Epilogue) 00:34:17.272 [Pipeline] catchError 00:34:17.273 [Pipeline] { 00:34:17.285 [Pipeline] echo 00:34:17.286 Cleanup processes 00:34:17.291 [Pipeline] sh 00:34:17.575 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:17.575 1681668 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:17.575 1682107 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:17.592 [Pipeline] sh 00:34:17.880 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:17.880 ++ grep -v 'sudo pgrep' 00:34:17.880 ++ awk '{print $1}' 00:34:17.880 + sudo kill -9 1681668 00:34:17.893 [Pipeline] sh 00:34:18.186 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:28.197 [Pipeline] sh 00:34:28.487 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:28.487 Artifacts sizes are good 00:34:28.501 [Pipeline] archiveArtifacts 00:34:28.509 Archiving artifacts 00:34:28.707 [Pipeline] sh 00:34:29.014 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:29.030 [Pipeline] cleanWs 00:34:29.040 [WS-CLEANUP] Deleting project workspace... 00:34:29.041 [WS-CLEANUP] Deferred wipeout is used... 00:34:29.048 [WS-CLEANUP] done 00:34:29.049 [Pipeline] } 00:34:29.065 [Pipeline] // catchError 00:34:29.076 [Pipeline] sh 00:34:29.358 + logger -p user.info -t JENKINS-CI 00:34:29.368 [Pipeline] } 00:34:29.383 [Pipeline] // stage 00:34:29.388 [Pipeline] } 00:34:29.402 [Pipeline] // node 00:34:29.407 [Pipeline] End of Pipeline 00:34:29.438 Finished: SUCCESS